00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4031 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3626 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.054 The recommended git tool is: git 00:00:00.054 using credential 00000000-0000-0000-0000-000000000002 00:00:00.057 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.072 Fetching changes from the remote Git repository 00:00:00.074 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.097 Using shallow fetch with depth 1 00:00:00.097 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.097 > git --version # timeout=10 00:00:00.128 > git --version # 'git version 2.39.2' 00:00:00.128 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.166 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.166 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.841 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.853 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.866 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:03.866 > git config core.sparsecheckout # timeout=10 00:00:03.876 > git read-tree -mu HEAD # timeout=10 00:00:03.892 > git checkout -f 
b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:03.910 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:03.910 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:04.002 [Pipeline] Start of Pipeline 00:00:04.017 [Pipeline] library 00:00:04.019 Loading library shm_lib@master 00:00:04.019 Library shm_lib@master is cached. Copying from home. 00:00:04.032 [Pipeline] node 00:00:04.046 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest 00:00:04.048 [Pipeline] { 00:00:04.057 [Pipeline] catchError 00:00:04.059 [Pipeline] { 00:00:04.072 [Pipeline] wrap 00:00:04.081 [Pipeline] { 00:00:04.090 [Pipeline] stage 00:00:04.091 [Pipeline] { (Prologue) 00:00:04.113 [Pipeline] echo 00:00:04.114 Node: VM-host-WFP7 00:00:04.121 [Pipeline] cleanWs 00:00:04.131 [WS-CLEANUP] Deleting project workspace... 00:00:04.131 [WS-CLEANUP] Deferred wipeout is used... 00:00:04.137 [WS-CLEANUP] done 00:00:04.342 [Pipeline] setCustomBuildProperty 00:00:04.397 [Pipeline] httpRequest 00:00:04.902 [Pipeline] echo 00:00:04.904 Sorcerer 10.211.164.101 is alive 00:00:04.913 [Pipeline] retry 00:00:04.915 [Pipeline] { 00:00:04.929 [Pipeline] httpRequest 00:00:04.934 HttpMethod: GET 00:00:04.935 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.935 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:04.936 Response Code: HTTP/1.1 200 OK 00:00:04.936 Success: Status code 200 is in the accepted range: 200,404 00:00:04.937 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.225 [Pipeline] } 00:00:05.236 [Pipeline] // retry 00:00:05.241 [Pipeline] sh 00:00:05.524 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:05.537 [Pipeline] httpRequest 00:00:06.644 [Pipeline] echo 00:00:06.646 Sorcerer 10.211.164.101 is 
alive 00:00:06.655 [Pipeline] retry 00:00:06.657 [Pipeline] { 00:00:06.672 [Pipeline] httpRequest 00:00:06.676 HttpMethod: GET 00:00:06.677 URL: http://10.211.164.101/packages/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:00:06.678 Sending request to url: http://10.211.164.101/packages/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:00:06.683 Response Code: HTTP/1.1 200 OK 00:00:06.684 Success: Status code 200 is in the accepted range: 200,404 00:00:06.684 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:01:37.889 [Pipeline] } 00:01:37.907 [Pipeline] // retry 00:01:37.915 [Pipeline] sh 00:01:38.202 + tar --no-same-owner -xf spdk_06bc8ce530f0c3f5d5947668cc624adba5375403.tar.gz 00:01:40.757 [Pipeline] sh 00:01:41.045 + git -C spdk log --oneline -n5 00:01:41.045 06bc8ce53 lib/vhost: use RB_TREE for vhost device management 00:01:41.045 b264e22f0 accel/error: fix callback type for tasks in a sequence 00:01:41.045 0732c1430 accel/error: don't submit tasks intended to fail 00:01:41.045 b53b961c8 accel/error: move interval check to a function 00:01:41.045 c9f92cbfa accel/error: check interval before submission 00:01:41.066 [Pipeline] withCredentials 00:01:41.078 > git --version # timeout=10 00:01:41.093 > git --version # 'git version 2.39.2' 00:01:41.111 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:41.113 [Pipeline] { 00:01:41.123 [Pipeline] retry 00:01:41.125 [Pipeline] { 00:01:41.140 [Pipeline] sh 00:01:41.424 + git ls-remote http://dpdk.org/git/dpdk main 00:01:41.439 [Pipeline] } 00:01:41.457 [Pipeline] // retry 00:01:41.463 [Pipeline] } 00:01:41.479 [Pipeline] // withCredentials 00:01:41.489 [Pipeline] httpRequest 00:01:41.890 [Pipeline] echo 00:01:41.891 Sorcerer 10.211.164.101 is alive 00:01:41.901 [Pipeline] retry 00:01:41.904 [Pipeline] { 00:01:41.917 [Pipeline] httpRequest 00:01:41.922 HttpMethod: GET 00:01:41.923 URL: 
http://10.211.164.101/packages/dpdk_25e5845b5272764d8c2cbf64a9fc5989b34a932c.tar.gz 00:01:41.924 Sending request to url: http://10.211.164.101/packages/dpdk_25e5845b5272764d8c2cbf64a9fc5989b34a932c.tar.gz 00:01:41.929 Response Code: HTTP/1.1 200 OK 00:01:41.930 Success: Status code 200 is in the accepted range: 200,404 00:01:41.930 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_25e5845b5272764d8c2cbf64a9fc5989b34a932c.tar.gz 00:01:54.309 [Pipeline] } 00:01:54.327 [Pipeline] // retry 00:01:54.335 [Pipeline] sh 00:01:54.676 + tar --no-same-owner -xf dpdk_25e5845b5272764d8c2cbf64a9fc5989b34a932c.tar.gz 00:01:56.069 [Pipeline] sh 00:01:56.352 + git -C dpdk log --oneline -n5 00:01:56.352 25e5845b52 net/dpaa2: support multiple flow rules extractions 00:01:56.352 4160359077 net/dpaa2: support VLAN traffic splitting 00:01:56.352 a0f8ddc412 net/dpaa2: add API to get endpoint name 00:01:56.352 7994a12c4e net/dpaa2: store drop priority in mbuf 00:01:56.352 00e928e970 net/dpaa2: improve DPDMUX error behavior settings 00:01:56.370 [Pipeline] writeFile 00:01:56.384 [Pipeline] sh 00:01:56.668 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:56.681 [Pipeline] sh 00:01:56.965 + cat autorun-spdk.conf 00:01:56.965 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:56.965 SPDK_RUN_ASAN=1 00:01:56.965 SPDK_RUN_UBSAN=1 00:01:56.965 SPDK_TEST_RAID=1 00:01:56.965 SPDK_TEST_NATIVE_DPDK=main 00:01:56.965 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:56.965 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:56.972 RUN_NIGHTLY=1 00:01:56.974 [Pipeline] } 00:01:56.989 [Pipeline] // stage 00:01:57.004 [Pipeline] stage 00:01:57.006 [Pipeline] { (Run VM) 00:01:57.019 [Pipeline] sh 00:01:57.303 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:57.303 + echo 'Start stage prepare_nvme.sh' 00:01:57.303 Start stage prepare_nvme.sh 00:01:57.303 + [[ -n 6 ]] 00:01:57.303 + disk_prefix=ex6 00:01:57.303 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 
00:01:57.303 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:01:57.303 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:01:57.303 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:57.303 ++ SPDK_RUN_ASAN=1 00:01:57.303 ++ SPDK_RUN_UBSAN=1 00:01:57.303 ++ SPDK_TEST_RAID=1 00:01:57.303 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:57.303 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:57.303 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:57.303 ++ RUN_NIGHTLY=1 00:01:57.303 + cd /var/jenkins/workspace/raid-vg-autotest 00:01:57.303 + nvme_files=() 00:01:57.303 + declare -A nvme_files 00:01:57.303 + backend_dir=/var/lib/libvirt/images/backends 00:01:57.303 + nvme_files['nvme.img']=5G 00:01:57.303 + nvme_files['nvme-cmb.img']=5G 00:01:57.303 + nvme_files['nvme-multi0.img']=4G 00:01:57.304 + nvme_files['nvme-multi1.img']=4G 00:01:57.304 + nvme_files['nvme-multi2.img']=4G 00:01:57.304 + nvme_files['nvme-openstack.img']=8G 00:01:57.304 + nvme_files['nvme-zns.img']=5G 00:01:57.304 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:57.304 + (( SPDK_TEST_FTL == 1 )) 00:01:57.304 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:57.304 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:01:57.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:01:57.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:01:57.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:01:57.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:01:57.304 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:57.304 + for nvme in "${!nvme_files[@]}" 00:01:57.304 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:01:57.563 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:57.563 + for nvme in "${!nvme_files[@]}" 00:01:57.563 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:01:57.563 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:57.563 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:01:57.563 + echo 'End stage prepare_nvme.sh' 00:01:57.563 End stage prepare_nvme.sh 00:01:57.575 [Pipeline] sh 00:01:57.858 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:57.859 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:01:57.859 00:01:57.859 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:01:57.859 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:01:57.859 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:01:57.859 HELP=0 00:01:57.859 DRY_RUN=0 00:01:57.859 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:01:57.859 NVME_DISKS_TYPE=nvme,nvme, 00:01:57.859 NVME_AUTO_CREATE=0 00:01:57.859 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:01:57.859 NVME_CMB=,, 00:01:57.859 NVME_PMR=,, 00:01:57.859 NVME_ZNS=,, 00:01:57.859 NVME_MS=,, 00:01:57.859 NVME_FDP=,, 00:01:57.859 SPDK_VAGRANT_DISTRO=fedora39 00:01:57.859 SPDK_VAGRANT_VMCPU=10 00:01:57.859 SPDK_VAGRANT_VMRAM=12288 00:01:57.859 SPDK_VAGRANT_PROVIDER=libvirt 00:01:57.859 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:57.859 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:57.859 SPDK_OPENSTACK_NETWORK=0 00:01:57.859 VAGRANT_PACKAGE_BOX=0 00:01:57.859 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:57.859 
FORCE_DISTRO=true 00:01:57.859 VAGRANT_BOX_VERSION= 00:01:57.859 EXTRA_VAGRANTFILES= 00:01:57.859 NIC_MODEL=virtio 00:01:57.859 00:01:57.859 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:01:57.859 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:01:59.765 Bringing machine 'default' up with 'libvirt' provider... 00:02:00.333 ==> default: Creating image (snapshot of base box volume). 00:02:00.333 ==> default: Creating domain with the following settings... 00:02:00.333 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731251466_f6976a13b69d9f967258 00:02:00.333 ==> default: -- Domain type: kvm 00:02:00.333 ==> default: -- Cpus: 10 00:02:00.333 ==> default: -- Feature: acpi 00:02:00.333 ==> default: -- Feature: apic 00:02:00.333 ==> default: -- Feature: pae 00:02:00.333 ==> default: -- Memory: 12288M 00:02:00.333 ==> default: -- Memory Backing: hugepages: 00:02:00.333 ==> default: -- Management MAC: 00:02:00.333 ==> default: -- Loader: 00:02:00.333 ==> default: -- Nvram: 00:02:00.333 ==> default: -- Base box: spdk/fedora39 00:02:00.333 ==> default: -- Storage pool: default 00:02:00.333 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731251466_f6976a13b69d9f967258.img (20G) 00:02:00.333 ==> default: -- Volume Cache: default 00:02:00.333 ==> default: -- Kernel: 00:02:00.334 ==> default: -- Initrd: 00:02:00.334 ==> default: -- Graphics Type: vnc 00:02:00.334 ==> default: -- Graphics Port: -1 00:02:00.334 ==> default: -- Graphics IP: 127.0.0.1 00:02:00.334 ==> default: -- Graphics Password: Not defined 00:02:00.334 ==> default: -- Video Type: cirrus 00:02:00.334 ==> default: -- Video VRAM: 9216 00:02:00.334 ==> default: -- Sound Type: 00:02:00.334 ==> default: -- Keymap: en-us 00:02:00.334 ==> default: -- TPM Path: 00:02:00.334 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:00.334 ==> default: -- Command line args: 00:02:00.334 
==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:00.334 ==> default: -> value=-drive, 00:02:00.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:02:00.334 ==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.334 ==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:00.334 ==> default: -> value=-drive, 00:02:00.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:00.334 ==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.334 ==> default: -> value=-drive, 00:02:00.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:00.334 ==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.334 ==> default: -> value=-drive, 00:02:00.334 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:00.334 ==> default: -> value=-device, 00:02:00.334 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:00.593 ==> default: Creating shared folders metadata... 00:02:00.593 ==> default: Starting domain. 00:02:01.972 ==> default: Waiting for domain to get an IP address... 00:02:20.069 ==> default: Waiting for SSH to become available... 00:02:20.069 ==> default: Configuring and enabling network interfaces... 
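A side note on the `-drive`/`-device` pairs above (a hedged reconstruction, not part of the console output): the three `ex6-nvme-multi*.img` backends all attach to the second controller (`id=nvme-1`, serial 12341) as namespaces 1 through 3, which is why the in-guest `setup.sh status` later lists `nvme1n1 nvme1n2 nvme1n3`. The generation pattern can be sketched as:

```shell
#!/bin/sh
# Hypothetical sketch of how the multi-namespace QEMU args above could be generated;
# the drive ids, bus name, and nsid numbering mirror the values printed in the log.
i=0
for img in ex6-nvme-multi0.img ex6-nvme-multi1.img ex6-nvme-multi2.img; do
  nsid=$((i + 1))
  echo "-drive format=raw,file=/var/lib/libvirt/images/backends/$img,if=none,id=nvme-1-drive$i"
  echo "-device nvme-ns,drive=nvme-1-drive$i,bus=nvme-1,nsid=$nsid"
  i=$((i + 1))
done
```

Each image becomes one `nvme-ns` device on the shared `nvme-1` bus, so the guest sees a single controller with three namespaces rather than three separate controllers.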
00:02:25.433 default: SSH address: 192.168.121.210:22 00:02:25.433 default: SSH username: vagrant 00:02:25.433 default: SSH auth method: private key 00:02:27.972 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:36.101 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:42.678 ==> default: Mounting SSHFS shared folder... 00:02:45.216 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:45.216 ==> default: Checking Mount.. 00:02:46.598 ==> default: Folder Successfully Mounted! 00:02:46.598 ==> default: Running provisioner: file... 00:02:47.538 default: ~/.gitconfig => .gitconfig 00:02:48.479 00:02:48.479 SUCCESS! 00:02:48.479 00:02:48.479 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:48.479 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:48.479 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
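A note on the image sizes from the `prepare_nvme.sh` stage earlier (a sanity-check sketch, not part of the job output): the `-s 5G`/`-s 4G`/`-s 8G` arguments are binary units, which is exactly why the formatter reported `size=5368709120`, `size=4294967296`, and `size=8589934592` for the 5G, 4G, and 8G backends respectively. Plain shell arithmetic confirms the correspondence:

```shell
#!/bin/sh
# Binary-gigabyte arithmetic matching the "size=" values the formatter printed.
gib=$((1024 * 1024 * 1024))
echo "5G = $((5 * gib)) bytes"   # 5368709120, as logged for the 5G images
echo "4G = $((4 * gib)) bytes"   # 4294967296, as logged for the 4G images
echo "8G = $((8 * gib)) bytes"   # 8589934592, as logged for the 8G image
```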
00:02:48.479 00:02:48.489 [Pipeline] } 00:02:48.504 [Pipeline] // stage 00:02:48.513 [Pipeline] dir 00:02:48.514 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:48.516 [Pipeline] { 00:02:48.528 [Pipeline] catchError 00:02:48.530 [Pipeline] { 00:02:48.541 [Pipeline] sh 00:02:48.847 + vagrant ssh-config --host vagrant 00:02:48.848 + sed -ne /^Host/,$p 00:02:48.848 + tee ssh_conf 00:02:51.388 Host vagrant 00:02:51.388 HostName 192.168.121.210 00:02:51.388 User vagrant 00:02:51.388 Port 22 00:02:51.388 UserKnownHostsFile /dev/null 00:02:51.388 StrictHostKeyChecking no 00:02:51.388 PasswordAuthentication no 00:02:51.388 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:51.388 IdentitiesOnly yes 00:02:51.388 LogLevel FATAL 00:02:51.388 ForwardAgent yes 00:02:51.388 ForwardX11 yes 00:02:51.388 00:02:51.403 [Pipeline] withEnv 00:02:51.406 [Pipeline] { 00:02:51.420 [Pipeline] sh 00:02:51.708 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:51.708 source /etc/os-release 00:02:51.708 [[ -e /image.version ]] && img=$(< /image.version) 00:02:51.708 # Minimal, systemd-like check. 00:02:51.708 if [[ -e /.dockerenv ]]; then 00:02:51.708 # Clear garbage from the node's name: 00:02:51.708 # agt-er_autotest_547-896 -> autotest_547-896 00:02:51.708 # $HOSTNAME is the actual container id 00:02:51.708 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:51.708 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:51.708 # We can assume this is a mount from a host where container is running, 00:02:51.708 # so fetch its hostname to easily identify the target swarm worker. 
00:02:51.708 container="$(< /etc/hostname) ($agent)" 00:02:51.708 else 00:02:51.708 # Fallback 00:02:51.708 container=$agent 00:02:51.708 fi 00:02:51.708 fi 00:02:51.708 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:51.708 00:02:51.981 [Pipeline] } 00:02:51.999 [Pipeline] // withEnv 00:02:52.008 [Pipeline] setCustomBuildProperty 00:02:52.040 [Pipeline] stage 00:02:52.043 [Pipeline] { (Tests) 00:02:52.062 [Pipeline] sh 00:02:52.346 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:52.618 [Pipeline] sh 00:02:52.897 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:53.173 [Pipeline] timeout 00:02:53.173 Timeout set to expire in 1 hr 30 min 00:02:53.175 [Pipeline] { 00:02:53.190 [Pipeline] sh 00:02:53.472 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:54.041 HEAD is now at 06bc8ce53 lib/vhost: use RB_TREE for vhost device management 00:02:54.067 [Pipeline] sh 00:02:54.394 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:54.669 [Pipeline] sh 00:02:54.954 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:55.231 [Pipeline] sh 00:02:55.517 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:55.776 ++ readlink -f spdk_repo 00:02:55.776 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:55.776 + [[ -n /home/vagrant/spdk_repo ]] 00:02:55.776 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:55.776 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:55.776 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:55.776 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:55.776 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:55.776 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:55.776 + cd /home/vagrant/spdk_repo 00:02:55.776 + source /etc/os-release 00:02:55.776 ++ NAME='Fedora Linux' 00:02:55.776 ++ VERSION='39 (Cloud Edition)' 00:02:55.776 ++ ID=fedora 00:02:55.776 ++ VERSION_ID=39 00:02:55.776 ++ VERSION_CODENAME= 00:02:55.776 ++ PLATFORM_ID=platform:f39 00:02:55.776 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:55.776 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:55.776 ++ LOGO=fedora-logo-icon 00:02:55.776 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:55.776 ++ HOME_URL=https://fedoraproject.org/ 00:02:55.776 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:55.776 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:55.776 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:55.776 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:55.776 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:55.776 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:55.776 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:55.776 ++ SUPPORT_END=2024-11-12 00:02:55.776 ++ VARIANT='Cloud Edition' 00:02:55.776 ++ VARIANT_ID=cloud 00:02:55.776 + uname -a 00:02:55.777 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:55.777 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:56.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:56.350 Hugepages 00:02:56.350 node hugesize free / total 00:02:56.350 node0 1048576kB 0 / 0 00:02:56.350 node0 2048kB 0 / 0 00:02:56.350 00:02:56.350 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.350 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:56.350 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:56.350 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:56.350 + rm -f /tmp/spdk-ld-path 00:02:56.350 + source autorun-spdk.conf 00:02:56.350 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.350 ++ SPDK_RUN_ASAN=1 00:02:56.350 ++ SPDK_RUN_UBSAN=1 00:02:56.350 ++ SPDK_TEST_RAID=1 00:02:56.350 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:56.350 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.350 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.350 ++ RUN_NIGHTLY=1 00:02:56.350 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:56.350 + [[ -n '' ]] 00:02:56.350 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:56.611 + for M in /var/spdk/build-*-manifest.txt 00:02:56.611 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:56.612 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.612 + for M in /var/spdk/build-*-manifest.txt 00:02:56.612 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:56.612 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.612 + for M in /var/spdk/build-*-manifest.txt 00:02:56.612 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:56.612 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:56.612 ++ uname 00:02:56.612 + [[ Linux == \L\i\n\u\x ]] 00:02:56.612 + sudo dmesg -T 00:02:56.612 + sudo dmesg --clear 00:02:56.612 + dmesg_pid=6156 00:02:56.612 + sudo dmesg -Tw 00:02:56.612 + [[ Fedora Linux == FreeBSD ]] 00:02:56.612 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.612 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:56.612 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:56.612 + [[ -x /usr/src/fio-static/fio ]] 00:02:56.612 + export FIO_BIN=/usr/src/fio-static/fio 00:02:56.612 + FIO_BIN=/usr/src/fio-static/fio 00:02:56.612 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:56.612 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:56.612 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:56.612 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.612 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:56.612 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:56.612 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.612 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:56.612 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.939 15:12:02 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:56.939 15:12:02 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=main 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:56.939 15:12:02 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1 00:02:56.939 15:12:02 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:56.939 15:12:02 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:56.939 15:12:03 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:56.939 15:12:03 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:56.939 15:12:03 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:56.939 15:12:03 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:56.939 15:12:03 -- 
scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:56.939 15:12:03 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:56.939 15:12:03 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.939 15:12:03 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.939 15:12:03 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.939 15:12:03 -- paths/export.sh@5 -- $ export PATH 00:02:56.939 15:12:03 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:56.939 15:12:03 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:56.939 15:12:03 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:56.939 15:12:03 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731251523.XXXXXX 00:02:56.939 15:12:03 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731251523.zJ9Aul 00:02:56.939 15:12:03 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:56.939 15:12:03 -- common/autobuild_common.sh@492 -- $ '[' -n main ']' 00:02:56.939 15:12:03 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:56.939 15:12:03 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:56.939 15:12:03 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:56.939 15:12:03 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:56.939 15:12:03 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:56.939 15:12:03 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:56.939 15:12:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:56.939 15:12:03 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator 
--disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:56.939 15:12:03 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:56.939 15:12:03 -- pm/common@17 -- $ local monitor 00:02:56.939 15:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.939 15:12:03 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:56.939 15:12:03 -- pm/common@25 -- $ sleep 1 00:02:56.939 15:12:03 -- pm/common@21 -- $ date +%s 00:02:56.939 15:12:03 -- pm/common@21 -- $ date +%s 00:02:56.939 15:12:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731251523 00:02:56.939 15:12:03 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731251523 00:02:56.939 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731251523_collect-vmstat.pm.log 00:02:56.939 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731251523_collect-cpu-load.pm.log 00:02:57.893 15:12:04 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:57.893 15:12:04 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:57.893 15:12:04 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:57.893 15:12:04 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:57.893 15:12:04 -- spdk/autobuild.sh@16 -- $ date -u 00:02:57.893 Sun Nov 10 03:12:04 PM UTC 2024 00:02:57.893 15:12:04 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:57.893 v25.01-pre-176-g06bc8ce53 00:02:57.893 15:12:04 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:57.893 15:12:04 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:57.893 15:12:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 
00:02:57.893 15:12:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:57.893 15:12:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.893 ************************************ 00:02:57.893 START TEST asan 00:02:57.893 ************************************ 00:02:57.893 using asan 00:02:57.893 15:12:04 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:57.893 00:02:57.893 real 0m0.001s 00:02:57.893 user 0m0.000s 00:02:57.893 sys 0m0.000s 00:02:57.893 15:12:04 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:57.893 15:12:04 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:57.893 ************************************ 00:02:57.893 END TEST asan 00:02:57.893 ************************************ 00:02:57.893 15:12:04 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:57.893 15:12:04 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:57.893 15:12:04 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:57.893 15:12:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:57.893 15:12:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.154 ************************************ 00:02:58.154 START TEST ubsan 00:02:58.154 ************************************ 00:02:58.154 using ubsan 00:02:58.154 15:12:04 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:58.154 00:02:58.154 real 0m0.000s 00:02:58.154 user 0m0.000s 00:02:58.154 sys 0m0.000s 00:02:58.154 15:12:04 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:58.154 15:12:04 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:58.154 ************************************ 00:02:58.154 END TEST ubsan 00:02:58.154 ************************************ 00:02:58.154 15:12:04 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:58.154 15:12:04 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:58.154 15:12:04 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:58.154 
15:12:04 -- common/autotest_common.sh@1103 -- $ '[' 2 -le 1 ']' 00:02:58.154 15:12:04 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:58.154 15:12:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:58.154 ************************************ 00:02:58.154 START TEST build_native_dpdk 00:02:58.154 ************************************ 00:02:58.154 15:12:04 build_native_dpdk -- common/autotest_common.sh@1127 -- $ _build_native_dpdk 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:58.154 25e5845b52 net/dpaa2: support multiple flow rules extractions 00:02:58.154 4160359077 net/dpaa2: support VLAN traffic splitting 00:02:58.154 a0f8ddc412 net/dpaa2: add API to get endpoint name 00:02:58.154 7994a12c4e net/dpaa2: store drop priority in mbuf 00:02:58.154 00e928e970 net/dpaa2: improve DPDMUX error behavior settings 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc1 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:58.154 15:12:04 build_native_dpdk -- 
common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:58.154 15:12:04 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.11.0-rc1 21.11.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 21.11.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:58.155 patching file config/rte_config.h 00:02:58.155 Hunk #1 succeeded at 72 (offset 13 lines). 
00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc1 24.07.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc1 '<' 24.07.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:58.155 15:12:04 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 24.11.0-rc1 24.07.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc1 '>=' 24.07.0 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:58.155 15:12:04 
build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ )) 00:02:58.155 15:12:04 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]] 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:58.156 15:12:04 build_native_dpdk -- scripts/common.sh@367 -- $ return 0 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:58.156 patching file drivers/bus/pci/linux/pci_uio.c 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:58.156 15:12:04 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:04.734 The Meson build 
system 00:03:04.734 Version: 1.5.0 00:03:04.734 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:04.734 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:04.734 Build type: native build 00:03:04.734 Project name: DPDK 00:03:04.734 Project version: 24.11.0-rc1 00:03:04.734 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:04.734 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:04.735 Host machine cpu family: x86_64 00:03:04.735 Host machine cpu: x86_64 00:03:04.735 Message: ## Building in Developer Mode ## 00:03:04.735 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:04.735 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:04.735 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:04.735 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:03:04.735 Program cat found: YES (/usr/bin/cat) 00:03:04.735 config/meson.build:119: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:03:04.735 Compiler for C supports arguments -march=native: YES 00:03:04.735 Checking for size of "void *" : 8 00:03:04.735 Checking for size of "void *" : 8 (cached) 00:03:04.735 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:04.735 Library m found: YES 00:03:04.735 Library numa found: YES 00:03:04.735 Has header "numaif.h" : YES 00:03:04.735 Library fdt found: NO 00:03:04.735 Library execinfo found: NO 00:03:04.735 Has header "execinfo.h" : YES 00:03:04.735 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:04.735 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:04.735 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:04.735 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:04.735 Run-time dependency openssl found: YES 3.1.1 00:03:04.735 Run-time dependency libpcap found: YES 1.10.4 00:03:04.735 Has header "pcap.h" with dependency libpcap: YES 00:03:04.735 Compiler for C supports arguments -Wcast-qual: YES 00:03:04.735 Compiler for C supports arguments -Wdeprecated: YES 00:03:04.735 Compiler for C supports arguments -Wformat: YES 00:03:04.735 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:04.735 Compiler for C supports arguments -Wformat-security: NO 00:03:04.735 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.735 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:04.735 Compiler for C supports arguments -Wnested-externs: YES 00:03:04.735 Compiler for C supports arguments -Wold-style-definition: YES 00:03:04.735 Compiler for C supports arguments -Wpointer-arith: YES 00:03:04.735 Compiler for C supports arguments -Wsign-compare: YES 00:03:04.735 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:04.735 Compiler for C supports arguments -Wundef: YES 00:03:04.735 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.735 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:04.735 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:03:04.735 Program objdump found: YES (/usr/bin/objdump) 00:03:04.735 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 00:03:04.735 Checking if "AVX512 checking" compiles: YES 00:03:04.735 Fetching value of define "__AVX512F__" : 1 00:03:04.735 Fetching value of define "__AVX512BW__" : 1 00:03:04.735 Fetching value of define "__AVX512DQ__" : 1 00:03:04.735 Fetching value of define "__AVX512VL__" : 1 00:03:04.735 Fetching value of define "__SSE4_2__" : 1 00:03:04.735 Fetching value of define "__AES__" : 1 00:03:04.735 Fetching value of define "__AVX__" : 1 00:03:04.735 Fetching value of define "__AVX2__" : 1 00:03:04.735 Fetching value of define "__AVX512BW__" : 1 00:03:04.735 Fetching value of define "__AVX512CD__" : 1 00:03:04.735 Fetching value of define "__AVX512DQ__" : 1 00:03:04.735 Fetching value of define "__AVX512F__" : 1 00:03:04.735 Fetching value of define "__AVX512VL__" : 1 00:03:04.735 Fetching value of define "__PCLMUL__" : 1 00:03:04.735 Fetching value of define "__RDRND__" : 1 00:03:04.735 Fetching value of define "__RDSEED__" : 1 00:03:04.735 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:04.735 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:04.735 Message: lib/log: Defining dependency "log" 00:03:04.735 Message: lib/kvargs: Defining dependency "kvargs" 00:03:04.735 Message: lib/argparse: Defining dependency "argparse" 00:03:04.735 Message: lib/telemetry: Defining dependency "telemetry" 00:03:04.735 Checking for function "pthread_attr_setaffinity_np" : YES 00:03:04.735 Checking for function "getentropy" : NO 00:03:04.735 Message: lib/eal: Defining dependency "eal" 00:03:04.735 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:03:04.735 Message: lib/ring: Defining dependency "ring" 00:03:04.735 Message: lib/rcu: Defining dependency "rcu" 00:03:04.735 Message: lib/mempool: Defining dependency "mempool" 
00:03:04.735 Message: lib/mbuf: Defining dependency "mbuf" 00:03:04.735 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:04.735 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:04.735 Compiler for C supports arguments -mpclmul: YES 00:03:04.735 Compiler for C supports arguments -maes: YES 00:03:04.735 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:04.735 Message: lib/net: Defining dependency "net" 00:03:04.735 Message: lib/meter: Defining dependency "meter" 00:03:04.735 Message: lib/ethdev: Defining dependency "ethdev" 00:03:04.735 Message: lib/pci: Defining dependency "pci" 00:03:04.735 Message: lib/cmdline: Defining dependency "cmdline" 00:03:04.735 Message: lib/metrics: Defining dependency "metrics" 00:03:04.735 Message: lib/hash: Defining dependency "hash" 00:03:04.735 Message: lib/timer: Defining dependency "timer" 00:03:04.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:04.735 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:04.735 Fetching value of define "__AVX512CD__" : 1 (cached) 00:03:04.735 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:04.735 Message: lib/acl: Defining dependency "acl" 00:03:04.735 Message: lib/bbdev: Defining dependency "bbdev" 00:03:04.735 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:04.735 Run-time dependency libelf found: YES 0.191 00:03:04.735 Message: lib/bpf: Defining dependency "bpf" 00:03:04.735 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:04.735 Message: lib/compressdev: Defining dependency "compressdev" 00:03:04.735 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:04.735 Message: lib/distributor: Defining dependency "distributor" 00:03:04.735 Message: lib/dmadev: Defining dependency "dmadev" 00:03:04.735 Message: lib/efd: Defining dependency "efd" 00:03:04.735 Message: lib/eventdev: Defining dependency "eventdev" 00:03:04.735 Message: lib/dispatcher: Defining dependency "dispatcher" 00:03:04.735 
Message: lib/gpudev: Defining dependency "gpudev" 00:03:04.735 Message: lib/gro: Defining dependency "gro" 00:03:04.735 Message: lib/gso: Defining dependency "gso" 00:03:04.735 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:04.735 Message: lib/jobstats: Defining dependency "jobstats" 00:03:04.735 Message: lib/latencystats: Defining dependency "latencystats" 00:03:04.735 Message: lib/lpm: Defining dependency "lpm" 00:03:04.735 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:04.735 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:04.735 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:04.735 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:04.735 Message: lib/member: Defining dependency "member" 00:03:04.735 Message: lib/pcapng: Defining dependency "pcapng" 00:03:04.735 Message: lib/power: Defining dependency "power" 00:03:04.735 Message: lib/rawdev: Defining dependency "rawdev" 00:03:04.735 Message: lib/regexdev: Defining dependency "regexdev" 00:03:04.735 Message: lib/mldev: Defining dependency "mldev" 00:03:04.735 Message: lib/rib: Defining dependency "rib" 00:03:04.735 Message: lib/reorder: Defining dependency "reorder" 00:03:04.735 Message: lib/sched: Defining dependency "sched" 00:03:04.735 Message: lib/security: Defining dependency "security" 00:03:04.735 Message: lib/stack: Defining dependency "stack" 00:03:04.735 Has header "linux/userfaultfd.h" : YES 00:03:04.735 Message: lib/vhost: Defining dependency "vhost" 00:03:04.735 Message: lib/ipsec: Defining dependency "ipsec" 00:03:04.735 Message: lib/pdcp: Defining dependency "pdcp" 00:03:04.735 Message: lib/fib: Defining dependency "fib" 00:03:04.735 Message: lib/port: Defining dependency "port" 00:03:04.735 Message: lib/pdump: Defining dependency "pdump" 00:03:04.735 Message: lib/table: Defining dependency "table" 00:03:04.735 Message: lib/pipeline: Defining dependency "pipeline" 00:03:04.735 Message: lib/graph: Defining dependency 
"graph" 00:03:04.735 Message: lib/node: Defining dependency "node" 00:03:04.735 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:04.735 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:04.735 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:04.735 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:04.735 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:04.735 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:04.735 Compiler for C supports arguments -Wno-unused-value: YES 00:03:04.735 Compiler for C supports arguments -Wno-format: YES 00:03:04.735 Compiler for C supports arguments -Wno-format-security: YES 00:03:04.735 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:04.735 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:04.735 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:04.735 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:05.676 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:05.676 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:05.676 Has header "sys/epoll.h" : YES 00:03:05.676 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:05.676 Configuring doxy-api-html.conf using configuration 00:03:05.676 doc/api/meson.build:54: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 00:03:05.676 Configuring doxy-api-man.conf using configuration 00:03:05.676 doc/api/meson.build:67: WARNING: The variable(s) 'DTS_API_MAIN_PAGE' in the input file 'doc/api/doxy-api.conf.in' are not present in the given configuration data. 
00:03:05.676 Program mandb found: YES (/usr/bin/mandb)
00:03:05.676 Program sphinx-build found: NO
00:03:05.676 Program sphinx-build found: NO
00:03:05.676 Configuring rte_build_config.h using configuration
00:03:05.676 Message:
00:03:05.676 =================
00:03:05.676 Applications Enabled
00:03:05.676 =================
00:03:05.676 
00:03:05.676 apps:
00:03:05.676 	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:03:05.676 	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:03:05.676 	test-pmd, test-regex, test-sad, test-security-perf,
00:03:05.676 
00:03:05.676 Message:
00:03:05.676 =================
00:03:05.676 Libraries Enabled
00:03:05.676 =================
00:03:05.676 
00:03:05.676 libs:
00:03:05.676 	log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:03:05.676 	mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:03:05.676 	hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:03:05.676 	cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:03:05.676 	gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:03:05.676 	rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:03:05.676 	vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:03:05.676 	graph, node,
00:03:05.676 
00:03:05.676 Message:
00:03:05.676 ===============
00:03:05.676 Drivers Enabled
00:03:05.676 ===============
00:03:05.676 
00:03:05.676 common:
00:03:05.676 
00:03:05.676 bus:
00:03:05.676 	pci, vdev,
00:03:05.676 mempool:
00:03:05.676 	ring,
00:03:05.676 dma:
00:03:05.676 
00:03:05.676 net:
00:03:05.676 	i40e,
00:03:05.676 raw:
00:03:05.676 
00:03:05.676 crypto:
00:03:05.676 
00:03:05.676 compress:
00:03:05.676 
00:03:05.676 regex:
00:03:05.676 
00:03:05.676 ml:
00:03:05.676 
00:03:05.676 vdpa:
00:03:05.676 
00:03:05.676 event:
00:03:05.676 
00:03:05.676 baseband:
00:03:05.676 
00:03:05.676 gpu:
00:03:05.676 
00:03:05.676 
00:03:05.676 Message:
00:03:05.676 =================
00:03:05.676 Content Skipped
00:03:05.676 =================
00:03:05.676 
00:03:05.676 apps:
00:03:05.676 
00:03:05.676 libs:
00:03:05.676 
00:03:05.676 drivers:
00:03:05.676 common/cpt: not in enabled drivers build config
00:03:05.676 common/dpaax: not in enabled drivers build config
00:03:05.676 common/iavf: not in enabled drivers build config
00:03:05.676 common/idpf: not in enabled drivers build config
00:03:05.676 common/ionic: not in enabled drivers build config
00:03:05.676 common/mvep: not in enabled drivers build config
00:03:05.676 common/octeontx: not in enabled drivers build config
00:03:05.676 bus/auxiliary: not in enabled drivers build config
00:03:05.676 bus/cdx: not in enabled drivers build config
00:03:05.676 bus/dpaa: not in enabled drivers build config
00:03:05.676 bus/fslmc: not in enabled drivers build config
00:03:05.676 bus/ifpga: not in enabled drivers build config
00:03:05.676 bus/platform: not in enabled drivers build config
00:03:05.676 bus/uacce: not in enabled drivers build config
00:03:05.676 bus/vmbus: not in enabled drivers build config
00:03:05.676 common/cnxk: not in enabled drivers build config
00:03:05.676 common/mlx5: not in enabled drivers build config
00:03:05.676 common/nfp: not in enabled drivers build config
00:03:05.676 common/nitrox: not in enabled drivers build config
00:03:05.676 common/qat: not in enabled drivers build config
00:03:05.676 common/sfc_efx: not in enabled drivers build config
00:03:05.676 mempool/bucket: not in enabled drivers build config
00:03:05.676 mempool/cnxk: not in enabled drivers build config
00:03:05.676 mempool/dpaa: not in enabled drivers build config
00:03:05.676 mempool/dpaa2: not in enabled drivers build config
00:03:05.676 mempool/octeontx: not in enabled drivers build config
00:03:05.676 mempool/stack: not in enabled drivers build config
00:03:05.676 dma/cnxk: not in enabled drivers build config
00:03:05.676 dma/dpaa: not in enabled drivers build config
00:03:05.676 dma/dpaa2: not in enabled drivers build config
00:03:05.676 dma/hisilicon: not in enabled drivers build config
00:03:05.676 dma/idxd: not in enabled drivers build config
00:03:05.676 dma/ioat: not in enabled drivers build config
00:03:05.676 dma/odm: not in enabled drivers build config
00:03:05.676 dma/skeleton: not in enabled drivers build config
00:03:05.676 net/af_packet: not in enabled drivers build config
00:03:05.676 net/af_xdp: not in enabled drivers build config
00:03:05.676 net/ark: not in enabled drivers build config
00:03:05.676 net/atlantic: not in enabled drivers build config
00:03:05.676 net/avp: not in enabled drivers build config
00:03:05.676 net/axgbe: not in enabled drivers build config
00:03:05.676 net/bnx2x: not in enabled drivers build config
00:03:05.676 net/bnxt: not in enabled drivers build config
00:03:05.676 net/bonding: not in enabled drivers build config
00:03:05.676 net/cnxk: not in enabled drivers build config
00:03:05.676 net/cpfl: not in enabled drivers build config
00:03:05.676 net/cxgbe: not in enabled drivers build config
00:03:05.676 net/dpaa: not in enabled drivers build config
00:03:05.676 net/dpaa2: not in enabled drivers build config
00:03:05.676 net/e1000: not in enabled drivers build config
00:03:05.676 net/ena: not in enabled drivers build config
00:03:05.676 net/enetc: not in enabled drivers build config
00:03:05.676 net/enetfec: not in enabled drivers build config
00:03:05.676 net/enic: not in enabled drivers build config
00:03:05.676 net/failsafe: not in enabled drivers build config
00:03:05.676 net/fm10k: not in enabled drivers build config
00:03:05.676 net/gve: not in enabled drivers build config
00:03:05.676 net/hinic: not in enabled drivers build config
00:03:05.676 net/hns3: not in enabled drivers build config
00:03:05.676 net/iavf: not in enabled drivers build config
00:03:05.676 net/ice: not in enabled drivers build config
00:03:05.676 net/idpf: not in enabled drivers build config
00:03:05.676 net/igc: not in enabled drivers build config
00:03:05.676 net/ionic: not in enabled drivers build config
00:03:05.676 net/ipn3ke: not in enabled drivers build config
00:03:05.676 net/ixgbe: not in enabled drivers build config
00:03:05.676 net/mana: not in enabled drivers build config
00:03:05.676 net/memif: not in enabled drivers build config
00:03:05.676 net/mlx4: not in enabled drivers build config
00:03:05.676 net/mlx5: not in enabled drivers build config
00:03:05.676 net/mvneta: not in enabled drivers build config
00:03:05.676 net/mvpp2: not in enabled drivers build config
00:03:05.676 net/netvsc: not in enabled drivers build config
00:03:05.676 net/nfb: not in enabled drivers build config
00:03:05.676 net/nfp: not in enabled drivers build config
00:03:05.676 net/ngbe: not in enabled drivers build config
00:03:05.676 net/ntnic: not in enabled drivers build config
00:03:05.676 net/null: not in enabled drivers build config
00:03:05.676 net/octeontx: not in enabled drivers build config
00:03:05.676 net/octeon_ep: not in enabled drivers build config
00:03:05.676 net/pcap: not in enabled drivers build config
00:03:05.676 net/pfe: not in enabled drivers build config
00:03:05.676 net/qede: not in enabled drivers build config
00:03:05.676 net/ring: not in enabled drivers build config
00:03:05.676 net/sfc: not in enabled drivers build config
00:03:05.676 net/softnic: not in enabled drivers build config
00:03:05.676 net/tap: not in enabled drivers build config
00:03:05.676 net/thunderx: not in enabled drivers build config
00:03:05.676 net/txgbe: not in enabled drivers build config
00:03:05.676 net/vdev_netvsc: not in enabled drivers build config
00:03:05.676 net/vhost: not in enabled drivers build config
00:03:05.676 net/virtio: not in enabled drivers build config
00:03:05.676 net/vmxnet3: not in enabled drivers build config
00:03:05.676 raw/cnxk_bphy: not in enabled drivers build config
00:03:05.677 raw/cnxk_gpio: not in enabled drivers build
config 00:03:05.677 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:05.677 raw/ifpga: not in enabled drivers build config 00:03:05.677 raw/ntb: not in enabled drivers build config 00:03:05.677 raw/skeleton: not in enabled drivers build config 00:03:05.677 crypto/armv8: not in enabled drivers build config 00:03:05.677 crypto/bcmfs: not in enabled drivers build config 00:03:05.677 crypto/caam_jr: not in enabled drivers build config 00:03:05.677 crypto/ccp: not in enabled drivers build config 00:03:05.677 crypto/cnxk: not in enabled drivers build config 00:03:05.677 crypto/dpaa_sec: not in enabled drivers build config 00:03:05.677 crypto/dpaa2_sec: not in enabled drivers build config 00:03:05.677 crypto/ionic: not in enabled drivers build config 00:03:05.677 crypto/ipsec_mb: not in enabled drivers build config 00:03:05.677 crypto/mlx5: not in enabled drivers build config 00:03:05.677 crypto/mvsam: not in enabled drivers build config 00:03:05.677 crypto/nitrox: not in enabled drivers build config 00:03:05.677 crypto/null: not in enabled drivers build config 00:03:05.677 crypto/octeontx: not in enabled drivers build config 00:03:05.677 crypto/openssl: not in enabled drivers build config 00:03:05.677 crypto/scheduler: not in enabled drivers build config 00:03:05.677 crypto/uadk: not in enabled drivers build config 00:03:05.677 crypto/virtio: not in enabled drivers build config 00:03:05.677 compress/isal: not in enabled drivers build config 00:03:05.677 compress/mlx5: not in enabled drivers build config 00:03:05.677 compress/nitrox: not in enabled drivers build config 00:03:05.677 compress/octeontx: not in enabled drivers build config 00:03:05.677 compress/uadk: not in enabled drivers build config 00:03:05.677 compress/zlib: not in enabled drivers build config 00:03:05.677 regex/mlx5: not in enabled drivers build config 00:03:05.677 regex/cn9k: not in enabled drivers build config 00:03:05.677 ml/cnxk: not in enabled drivers build config 00:03:05.677 vdpa/ifc: 
not in enabled drivers build config 00:03:05.677 vdpa/mlx5: not in enabled drivers build config 00:03:05.677 vdpa/nfp: not in enabled drivers build config 00:03:05.677 vdpa/sfc: not in enabled drivers build config 00:03:05.677 event/cnxk: not in enabled drivers build config 00:03:05.677 event/dlb2: not in enabled drivers build config 00:03:05.677 event/dpaa: not in enabled drivers build config 00:03:05.677 event/dpaa2: not in enabled drivers build config 00:03:05.677 event/dsw: not in enabled drivers build config 00:03:05.677 event/opdl: not in enabled drivers build config 00:03:05.677 event/skeleton: not in enabled drivers build config 00:03:05.677 event/sw: not in enabled drivers build config 00:03:05.677 event/octeontx: not in enabled drivers build config 00:03:05.677 baseband/acc: not in enabled drivers build config 00:03:05.677 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:05.677 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:05.677 baseband/la12xx: not in enabled drivers build config 00:03:05.677 baseband/null: not in enabled drivers build config 00:03:05.677 baseband/turbo_sw: not in enabled drivers build config 00:03:05.677 gpu/cuda: not in enabled drivers build config 00:03:05.677 00:03:05.677 00:03:05.677 Build targets in project: 221 00:03:05.677 00:03:05.677 DPDK 24.11.0-rc1 00:03:05.677 00:03:05.677 User defined options 00:03:05.677 libdir : lib 00:03:05.677 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:05.677 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:05.677 c_link_args : 00:03:05.677 enable_docs : false 00:03:05.677 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:05.677 enable_kmods : false 00:03:05.677 machine : native 00:03:05.677 tests : false 00:03:05.677 00:03:05.677 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:05.677 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and 
deprecated. 00:03:05.937 15:12:12 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:05.937 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:05.937 [1/725] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o 00:03:05.937 [2/725] Compiling C object lib/librte_log.a.p/log_log_journal.c.o 00:03:05.937 [3/725] Compiling C object lib/librte_log.a.p/log_log_color.c.o 00:03:05.937 [4/725] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o 00:03:06.197 [5/725] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:06.197 [6/725] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:06.197 [7/725] Linking static target lib/librte_kvargs.a 00:03:06.197 [8/725] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:06.197 [9/725] Linking static target lib/librte_log.a 00:03:06.197 [10/725] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o 00:03:06.197 [11/725] Linking static target lib/librte_argparse.a 00:03:06.457 [12/725] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.457 [13/725] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:06.457 [14/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:06.458 [15/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:06.458 [16/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:06.458 [17/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:06.458 [18/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:06.458 [19/725] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.458 [20/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:06.458 [21/725] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:06.717 [22/725] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.717 [23/725] Linking target lib/librte_log.so.25.0 00:03:06.717 [24/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:06.717 [25/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:06.717 [26/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o 00:03:06.717 [27/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:06.977 [28/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:06.977 [29/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:06.977 [30/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:06.977 [31/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:06.977 [32/725] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols 00:03:06.977 [33/725] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:06.977 [34/725] Linking target lib/librte_kvargs.so.25.0 00:03:06.977 [35/725] Linking target lib/librte_argparse.so.25.0 00:03:06.977 [36/725] Linking static target lib/librte_telemetry.a 00:03:06.977 [37/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:06.977 [38/725] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols 00:03:07.237 [39/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:07.237 [40/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:07.237 [41/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:07.237 [42/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:07.237 [43/725] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:07.497 [44/725] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.497 [45/725] Linking target lib/librte_telemetry.so.25.0 00:03:07.497 [46/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:07.497 [47/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:07.497 [48/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:07.497 [49/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:07.497 [50/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:07.497 [51/725] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols 00:03:07.497 [52/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:07.497 [53/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o 00:03:07.497 [54/725] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:07.758 [55/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:07.758 [56/725] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:07.758 [57/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:07.758 [58/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:08.018 [59/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:08.018 [60/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:08.018 [61/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:08.018 [62/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:08.018 [63/725] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:08.018 [64/725] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:08.284 [65/725] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:08.284 [66/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:08.284 [67/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:08.284 [68/725] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:08.284 [69/725] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:08.284 [70/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:08.284 [71/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:08.284 [72/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:08.284 [73/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:08.284 [74/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:08.554 [75/725] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:08.554 [76/725] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.554 [77/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.554 [78/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.554 [79/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.814 [80/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.814 [81/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:08.814 [82/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.814 [83/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.814 [84/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.814 [85/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.814 [86/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:09.074 [87/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.074 [88/725] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:09.074 [89/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.074 [90/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:03:09.074 [91/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:09.074 [92/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.074 [93/725] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.334 [94/725] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.334 [95/725] Linking static target lib/librte_ring.a 00:03:09.334 [96/725] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.334 [97/725] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.334 [98/725] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.334 [99/725] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.594 [100/725] Linking static target lib/librte_eal.a 00:03:09.594 [101/725] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.594 [102/725] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.594 [103/725] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.854 [104/725] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.854 [105/725] Linking static target lib/librte_mempool.a 00:03:09.854 [106/725] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:09.854 [107/725] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:09.854 [108/725] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:09.854 [109/725] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.114 [110/725] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.114 [111/725] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 
00:03:10.114 [112/725] Linking static target lib/librte_rcu.a 00:03:10.114 [113/725] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.114 [114/725] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.374 [115/725] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.374 [116/725] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:10.374 [117/725] Linking static target lib/librte_meter.a 00:03:10.374 [118/725] Linking static target lib/librte_net.a 00:03:10.374 [119/725] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.374 [120/725] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.374 [121/725] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.374 [122/725] Linking static target lib/librte_mbuf.a 00:03:10.374 [123/725] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:10.634 [124/725] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.634 [125/725] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.634 [126/725] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:10.634 [127/725] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:10.894 [128/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:10.894 [129/725] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.154 [130/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:11.413 [131/725] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:11.413 [132/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:11.414 [133/725] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:11.673 [134/725] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:11.673 [135/725] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:11.673 [136/725] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:11.673 [137/725] Linking static target lib/librte_pci.a 00:03:11.673 [138/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:11.673 [139/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:11.673 [140/725] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:11.933 [141/725] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.933 [142/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:11.933 [143/725] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:11.933 [144/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:11.933 [145/725] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:11.933 [146/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:11.933 [147/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:11.933 [148/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:12.193 [149/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:12.193 [150/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:12.193 [151/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:12.193 [152/725] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:12.193 [153/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:12.193 [154/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:12.453 [155/725] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:12.453 [156/725] Linking static target 
lib/librte_cmdline.a 00:03:12.453 [157/725] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:12.453 [158/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:12.453 [159/725] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:12.453 [160/725] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:12.453 [161/725] Linking static target lib/librte_metrics.a 00:03:12.713 [162/725] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:12.713 [163/725] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:12.973 [164/725] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.233 [165/725] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.233 [166/725] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:13.233 [167/725] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:13.233 [168/725] Linking static target lib/librte_timer.a 00:03:13.493 [169/725] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:13.493 [170/725] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:13.493 [171/725] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.493 [172/725] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:13.493 [173/725] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:14.060 [174/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:14.060 [175/725] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:14.060 [176/725] Linking static target lib/librte_bitratestats.a 00:03:14.060 [177/725] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:14.320 [178/725] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.320 [179/725] Compiling C object 
lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:14.320 [180/725] Linking static target lib/librte_bbdev.a 00:03:14.320 [181/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:14.887 [182/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:14.887 [183/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:14.887 [184/725] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.887 [185/725] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:14.887 [186/725] Linking static target lib/librte_hash.a 00:03:15.146 [187/725] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:03:15.146 [188/725] Linking static target lib/acl/libavx2_tmp.a 00:03:15.146 [189/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:15.146 [190/725] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:15.146 [191/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:15.146 [192/725] Linking static target lib/librte_ethdev.a 00:03:15.419 [193/725] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:15.419 [194/725] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.419 [195/725] Linking target lib/librte_eal.so.25.0 00:03:15.419 [196/725] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.419 [197/725] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:15.420 [198/725] Linking static target lib/librte_cfgfile.a 00:03:15.420 [199/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:15.678 [200/725] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols 00:03:15.678 [201/725] Linking target lib/librte_ring.so.25.0 00:03:15.678 [202/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:15.678 [203/725] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols 
00:03:15.938 [204/725] Linking target lib/librte_meter.so.25.0 00:03:15.938 [205/725] Linking target lib/librte_rcu.so.25.0 00:03:15.938 [206/725] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.938 [207/725] Linking target lib/librte_mempool.so.25.0 00:03:15.938 [208/725] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:15.938 [209/725] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:15.938 [210/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:15.938 [211/725] Linking target lib/librte_pci.so.25.0 00:03:15.938 [212/725] Linking target lib/librte_timer.so.25.0 00:03:15.938 [213/725] Linking target lib/librte_cfgfile.so.25.0 00:03:15.938 [214/725] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols 00:03:15.938 [215/725] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:15.938 [216/725] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols 00:03:15.938 [217/725] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols 00:03:15.938 [218/725] Linking static target lib/librte_bpf.a 00:03:15.938 [219/725] Linking target lib/librte_mbuf.so.25.0 00:03:16.197 [220/725] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols 00:03:16.197 [221/725] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols 00:03:16.197 [222/725] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols 00:03:16.197 [223/725] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:16.197 [224/725] Linking target lib/librte_bbdev.so.25.0 00:03:16.197 [225/725] Linking target lib/librte_net.so.25.0 00:03:16.197 [226/725] Linking static target lib/librte_compressdev.a 00:03:16.197 [227/725] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.456 
[228/725] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols 00:03:16.456 [229/725] Linking target lib/librte_cmdline.so.25.0 00:03:16.456 [230/725] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:03:16.456 [231/725] Linking static target lib/librte_acl.a 00:03:16.456 [232/725] Linking target lib/librte_hash.so.25.0 00:03:16.456 [233/725] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:16.456 [234/725] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols 00:03:16.715 [235/725] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:16.715 [236/725] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:16.715 [237/725] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:16.715 [238/725] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.715 [239/725] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.715 [240/725] Linking target lib/librte_compressdev.so.25.0 00:03:16.715 [241/725] Linking target lib/librte_acl.so.25.0 00:03:16.974 [242/725] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:16.974 [243/725] Linking static target lib/librte_distributor.a 00:03:16.974 [244/725] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols 00:03:16.974 [245/725] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:16.974 [246/725] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:17.234 [247/725] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.234 [248/725] Linking target lib/librte_distributor.so.25.0 00:03:17.234 [249/725] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:17.234 [250/725] Linking 
static target lib/librte_dmadev.a 00:03:17.493 [251/725] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:17.493 [252/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:17.753 [253/725] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.753 [254/725] Linking target lib/librte_dmadev.so.25.0 00:03:18.013 [255/725] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols 00:03:18.013 [256/725] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:18.013 [257/725] Linking static target lib/librte_efd.a 00:03:18.013 [258/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:03:18.273 [259/725] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.273 [260/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:18.273 [261/725] Linking target lib/librte_efd.so.25.0 00:03:18.273 [262/725] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.273 [263/725] Linking static target lib/librte_cryptodev.a 00:03:18.532 [264/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:18.532 [265/725] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:03:18.532 [266/725] Linking static target lib/librte_dispatcher.a 00:03:18.532 [267/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:18.792 [268/725] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:18.792 [269/725] Linking static target lib/librte_gpudev.a 00:03:18.792 [270/725] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:18.792 [271/725] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:19.051 [272/725] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:19.051 [273/725] Generating lib/dispatcher.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:19.051 [274/725] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:19.308 [275/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:19.572 [276/725] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:19.572 [277/725] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:19.572 [278/725] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.572 [279/725] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:19.572 [280/725] Linking target lib/librte_gpudev.so.25.0 00:03:19.572 [281/725] Linking static target lib/librte_gro.a 00:03:19.572 [282/725] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:19.572 [283/725] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:19.572 [284/725] Linking static target lib/librte_eventdev.a 00:03:19.572 [285/725] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:19.829 [286/725] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.829 [287/725] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:19.829 [288/725] Linking target lib/librte_cryptodev.so.25.0 00:03:19.829 [289/725] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.829 [290/725] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols 00:03:19.829 [291/725] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:19.829 [292/725] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:19.829 [293/725] Linking static target lib/librte_gso.a 00:03:20.086 [294/725] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.086 [295/725] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:20.344 [296/725] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:20.344 [297/725] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:20.344 [298/725] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:20.344 [299/725] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:20.344 [300/725] Linking static target lib/librte_jobstats.a 00:03:20.344 [301/725] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.344 [302/725] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:20.603 [303/725] Linking target lib/librte_ethdev.so.25.0 00:03:20.603 [304/725] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:20.603 [305/725] Linking static target lib/librte_ip_frag.a 00:03:20.603 [306/725] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.603 [307/725] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols 00:03:20.603 [308/725] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:20.603 [309/725] Linking target lib/librte_jobstats.so.25.0 00:03:20.603 [310/725] Linking target lib/librte_metrics.so.25.0 00:03:20.603 [311/725] Linking target lib/librte_bpf.so.25.0 00:03:20.603 [312/725] Linking target lib/librte_gro.so.25.0 00:03:20.862 [313/725] Linking target lib/librte_gso.so.25.0 00:03:20.862 [314/725] Linking static target lib/librte_latencystats.a 00:03:20.862 [315/725] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:20.862 [316/725] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:20.862 [317/725] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.862 [318/725] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols 00:03:20.862 [319/725] Generating symbol file 
lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols 00:03:20.862 [320/725] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:20.862 [321/725] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:20.862 [322/725] Linking target lib/librte_bitratestats.so.25.0 00:03:20.862 [323/725] Linking target lib/librte_ip_frag.so.25.0 00:03:20.862 [324/725] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:20.862 [325/725] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.122 [326/725] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols 00:03:21.122 [327/725] Linking target lib/librte_latencystats.so.25.0 00:03:21.122 [328/725] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:21.122 [329/725] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:21.122 [330/725] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:21.122 [331/725] Linking static target lib/librte_lpm.a 00:03:21.381 [332/725] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:21.381 [333/725] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:21.640 [334/725] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:21.640 [335/725] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.640 [336/725] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:21.640 [337/725] Linking target lib/librte_lpm.so.25.0 00:03:21.640 [338/725] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:21.640 [339/725] Linking static target lib/librte_pcapng.a 00:03:21.640 [340/725] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:21.640 [341/725] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols 00:03:21.898 [342/725] Compiling C object 
lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:21.898 [343/725] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.898 [344/725] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:21.898 [345/725] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:21.898 [346/725] Linking target lib/librte_eventdev.so.25.0 00:03:21.899 [347/725] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.899 [348/725] Linking target lib/librte_pcapng.so.25.0 00:03:22.158 [349/725] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols 00:03:22.158 [350/725] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:22.158 [351/725] Linking target lib/librte_dispatcher.so.25.0 00:03:22.158 [352/725] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols 00:03:22.158 [353/725] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:22.158 [354/725] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:22.158 [355/725] Linking static target lib/librte_power.a 00:03:22.419 [356/725] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:22.419 [357/725] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:22.419 [358/725] Linking static target lib/librte_regexdev.a 00:03:22.419 [359/725] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:22.419 [360/725] Linking static target lib/librte_rawdev.a 00:03:22.419 [361/725] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:22.679 [362/725] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:22.679 [363/725] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:22.679 [364/725] Linking static target lib/librte_mldev.a 00:03:22.679 [365/725] Compiling C object 
lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:22.679 [366/725] Linking static target lib/librte_member.a 00:03:22.679 [367/725] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:22.939 [368/725] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.939 [369/725] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.939 [370/725] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:22.939 [371/725] Linking target lib/librte_power.so.25.0 00:03:22.939 [372/725] Linking static target lib/librte_rib.a 00:03:22.939 [373/725] Linking target lib/librte_rawdev.so.25.0 00:03:22.939 [374/725] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:22.939 [375/725] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:22.939 [376/725] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:22.939 [377/725] Linking static target lib/librte_reorder.a 00:03:23.198 [378/725] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.198 [379/725] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.198 [380/725] Linking target lib/librte_regexdev.so.25.0 00:03:23.198 [381/725] Linking target lib/librte_member.so.25.0 00:03:23.198 [382/725] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:23.198 [383/725] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:23.198 [384/725] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.457 [385/725] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:23.457 [386/725] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.457 [387/725] Linking target lib/librte_reorder.so.25.0 00:03:23.457 [388/725] Linking target lib/librte_rib.so.25.0 00:03:23.457 [389/725] Generating symbol 
file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols 00:03:23.457 [390/725] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols 00:03:23.457 [391/725] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:23.457 [392/725] Linking static target lib/librte_security.a 00:03:23.457 [393/725] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:23.457 [394/725] Linking static target lib/librte_stack.a 00:03:23.715 [395/725] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:23.715 [396/725] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.715 [397/725] Linking target lib/librte_stack.so.25.0 00:03:23.974 [398/725] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:23.974 [399/725] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:23.974 [400/725] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.974 [401/725] Linking target lib/librte_security.so.25.0 00:03:24.233 [402/725] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:24.233 [403/725] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols 00:03:24.233 [404/725] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:24.233 [405/725] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.233 [406/725] Linking target lib/librte_mldev.so.25.0 00:03:24.233 [407/725] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:24.233 [408/725] Linking static target lib/librte_sched.a 00:03:24.492 [409/725] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:24.751 [410/725] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:24.751 [411/725] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.751 [412/725] Linking target lib/librte_sched.so.25.0 
00:03:24.751 [413/725] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:24.751 [414/725] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:25.010 [415/725] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols 00:03:25.010 [416/725] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:25.010 [417/725] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:25.269 [418/725] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:25.269 [419/725] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:25.269 [420/725] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:25.528 [421/725] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:25.528 [422/725] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:25.787 [423/725] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:25.787 [424/725] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:25.787 [425/725] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:25.787 [426/725] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:03:26.046 [427/725] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:26.046 [428/725] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:26.305 [429/725] Linking static target lib/librte_ipsec.a 00:03:26.564 [430/725] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:03:26.564 [431/725] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:26.564 [432/725] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:26.564 [433/725] Linking target lib/librte_ipsec.so.25.0 00:03:26.564 [434/725] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:03:26.564 [435/725] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols 00:03:26.822 [436/725] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 
00:03:26.822 [437/725] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:26.822 [438/725] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:27.081 [439/725] Linking static target lib/librte_pdcp.a 00:03:27.081 [440/725] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:27.081 [441/725] Linking static target lib/librte_fib.a 00:03:27.081 [442/725] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:27.081 [443/725] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:27.341 [444/725] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:27.341 [445/725] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.341 [446/725] Linking target lib/librte_pdcp.so.25.0 00:03:27.341 [447/725] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.341 [448/725] Linking target lib/librte_fib.so.25.0 00:03:27.600 [449/725] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:27.600 [450/725] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:27.859 [451/725] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:27.859 [452/725] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:27.859 [453/725] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:27.859 [454/725] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:28.119 [455/725] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:28.378 [456/725] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:28.378 [457/725] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:28.378 [458/725] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:28.378 [459/725] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:28.378 [460/725] 
Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:28.378 [461/725] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:28.637 [462/725] Linking static target lib/librte_port.a 00:03:28.637 [463/725] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:28.637 [464/725] Linking static target lib/librte_pdump.a 00:03:28.637 [465/725] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:28.896 [466/725] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.896 [467/725] Linking target lib/librte_pdump.so.25.0 00:03:28.896 [468/725] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:28.896 [469/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:28.896 [470/725] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.155 [471/725] Linking target lib/librte_port.so.25.0 00:03:29.155 [472/725] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols 00:03:29.414 [473/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:29.414 [474/725] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:29.414 [475/725] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:29.414 [476/725] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:03:29.414 [477/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:29.414 [478/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:29.414 [479/725] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:29.674 [480/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:29.933 [481/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:29.933 [482/725] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 
00:03:29.933 [483/725] Linking static target lib/librte_table.a 00:03:30.193 [484/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:30.452 [485/725] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:30.452 [486/725] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:30.711 [487/725] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.711 [488/725] Linking target lib/librte_table.so.25.0 00:03:30.711 [489/725] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:30.711 [490/725] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols 00:03:30.711 [491/725] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:30.971 [492/725] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:30.971 [493/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:31.230 [494/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:31.230 [495/725] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:31.230 [496/725] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:31.230 [497/725] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:31.489 [498/725] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:31.748 [499/725] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:31.748 [500/725] Linking static target lib/librte_graph.a 00:03:31.748 [501/725] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:31.748 [502/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:31.748 [503/725] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:31.748 [504/725] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:32.008 [505/725] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:32.267 [506/725] 
Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.267 [507/725] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:32.267 [508/725] Linking target lib/librte_graph.so.25.0 00:03:32.526 [509/725] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:32.526 [510/725] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols 00:03:32.526 [511/725] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:32.785 [512/725] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:32.785 [513/725] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:32.785 [514/725] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:32.785 [515/725] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:32.785 [516/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:33.044 [517/725] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:33.044 [518/725] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:33.044 [519/725] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:33.303 [520/725] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:33.303 [521/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:33.303 [522/725] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:33.303 [523/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:33.562 [524/725] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:33.562 [525/725] Linking static target lib/librte_node.a 00:03:33.562 [526/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:33.821 [527/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:33.821 [528/725] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.821 
[529/725] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:33.821 [530/725] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:33.821 [531/725] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:33.821 [532/725] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:33.821 [533/725] Linking target lib/librte_node.so.25.0 00:03:34.080 [534/725] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:34.080 [535/725] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:34.080 [536/725] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.080 [537/725] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.080 [538/725] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:34.080 [539/725] Linking static target drivers/librte_bus_pci.a 00:03:34.080 [540/725] Linking static target drivers/librte_bus_vdev.a 00:03:34.339 [541/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:34.339 [542/725] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:34.339 [543/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:34.339 [544/725] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:34.339 [545/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:34.339 [546/725] Linking target drivers/librte_bus_vdev.so.25.0 00:03:34.599 [547/725] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:34.599 [548/725] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:34.599 [549/725] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols 00:03:34.599 [550/725] Generating drivers/rte_bus_pci.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:34.599 [551/725] Linking target drivers/librte_bus_pci.so.25.0 00:03:34.599 [552/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:34.599 [553/725] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:34.599 [554/725] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.599 [555/725] Linking static target drivers/librte_mempool_ring.a 00:03:34.858 [556/725] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:34.858 [557/725] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols 00:03:34.858 [558/725] Linking target drivers/librte_mempool_ring.so.25.0 00:03:34.858 [559/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:35.426 [560/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:35.427 [561/725] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:35.427 [562/725] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:35.686 [563/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:35.945 [564/725] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:35.945 [565/725] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:36.204 [566/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:36.463 [567/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:36.723 [568/725] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:36.723 [569/725] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:36.723 [570/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:36.723 [571/725] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:36.723 [572/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:36.723 [573/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:36.723 [574/725] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:03:36.982 [575/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:37.241 [576/725] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:37.241 [577/725] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:37.500 [578/725] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:37.759 [579/725] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:37.759 [580/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:38.019 [581/725] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:38.019 [582/725] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:38.019 [583/725] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:38.019 [584/725] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:38.019 [585/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:38.286 [586/725] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:38.286 [587/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:38.550 [588/725] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:03:38.550 [589/725] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:38.550 [590/725] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:38.550 [591/725] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:38.550 [592/725] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:38.809 [593/725] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:38.809 [594/725] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:38.809 [595/725] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:38.809 [596/725] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:39.069 [597/725] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:39.328 [598/725] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:39.328 [599/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:39.587 [600/725] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:39.587 [601/725] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:39.587 [602/725] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:39.587 [603/725] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:39.847 [604/725] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:39.847 [605/725] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:39.847 [606/725] Linking static target drivers/librte_net_i40e.a 00:03:39.847 [607/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:39.847 [608/725] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:40.106 [609/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:40.106 [610/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:40.106 [611/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:40.106 [612/725] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:40.365 [613/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:40.365 [614/725] Generating drivers/rte_net_i40e.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:40.624 [615/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:40.624 [616/725] Linking target drivers/librte_net_i40e.so.25.0 00:03:40.884 [617/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:40.884 [618/725] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:40.884 [619/725] Linking static target lib/librte_vhost.a 00:03:40.884 [620/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:40.884 [621/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:40.884 [622/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:40.884 [623/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:40.884 [624/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:40.884 [625/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:41.143 [626/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:41.402 [627/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:41.402 [628/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:41.402 [629/725] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:41.402 [630/725] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:41.402 [631/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:41.970 [632/725] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.970 [633/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:41.970 [634/725] Linking target 
lib/librte_vhost.so.25.0 00:03:41.970 [635/725] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:41.970 [636/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:42.229 [637/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:42.798 [638/725] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:42.798 [639/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:42.798 [640/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:42.798 [641/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:42.798 [642/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:42.798 [643/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:43.057 [644/725] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:43.057 [645/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:43.057 [646/725] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:43.317 [647/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:43.317 [648/725] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:43.317 [649/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:43.317 [650/725] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:43.317 [651/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:43.576 [652/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:43.576 [653/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:43.576 [654/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:43.576 [655/725] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:43.835 [656/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:43.835 [657/725] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:43.835 [658/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:44.095 [659/725] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:44.095 [660/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:44.095 [661/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:44.095 [662/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:44.354 [663/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:44.354 [664/725] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:44.354 [665/725] Linking static target lib/librte_pipeline.a 00:03:44.354 [666/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:44.354 [667/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:44.613 [668/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:44.613 [669/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:44.613 [670/725] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:44.872 [671/725] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:44.872 [672/725] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:44.873 [673/725] Linking target app/dpdk-dumpcap 00:03:44.873 [674/725] Linking target app/dpdk-graph 00:03:44.873 [675/725] Linking target app/dpdk-pdump 00:03:44.873 [676/725] Linking target app/dpdk-proc-info 00:03:44.873 [677/725] Linking target app/dpdk-test-acl 00:03:44.873 [678/725] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:45.131 [679/725] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:45.131 [680/725] Linking target app/dpdk-test-bbdev 00:03:45.131 [681/725] Linking target app/dpdk-test-cmdline 00:03:45.131 [682/725] Linking target app/dpdk-test-compress-perf 00:03:45.131 [683/725] Linking target app/dpdk-test-crypto-perf 00:03:45.391 [684/725] Linking target app/dpdk-test-dma-perf 00:03:45.391 [685/725] Linking target app/dpdk-test-fib 00:03:45.391 [686/725] Linking target app/dpdk-test-eventdev 00:03:45.391 [687/725] Linking target app/dpdk-test-flow-perf 00:03:45.391 [688/725] Linking target app/dpdk-test-gpudev 00:03:45.650 [689/725] Linking target app/dpdk-test-mldev 00:03:45.650 [690/725] Linking target app/dpdk-test-pipeline 00:03:45.650 [691/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:45.909 [692/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:45.909 [693/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:46.168 [694/725] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:46.168 [695/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:46.168 [696/725] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:46.428 [697/725] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:46.428 [698/725] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:46.428 [699/725] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:46.686 [700/725] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:46.686 [701/725] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:46.686 [702/725] Linking target lib/librte_pipeline.so.25.0 00:03:46.953 [703/725] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:46.953 [704/725] Compiling C object 
app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:46.953 [705/725] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:46.953 [706/725] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:47.221 [707/725] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:47.221 [708/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:47.479 [709/725] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:47.479 [710/725] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:47.738 [711/725] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:47.738 [712/725] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:47.997 [713/725] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:47.997 [714/725] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:47.997 [715/725] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:47.997 [716/725] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:48.256 [717/725] Linking target app/dpdk-test-sad 00:03:48.256 [718/725] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:03:48.256 [719/725] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:48.256 [720/725] Linking target app/dpdk-test-regex 00:03:48.515 [721/725] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:48.515 [722/725] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:48.774 [723/725] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:48.774 [724/725] Linking target app/dpdk-test-security-perf 00:03:49.033 [725/725] Linking target app/dpdk-testpmd 00:03:49.033 15:12:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:49.033 15:12:55 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:49.033 15:12:55 
build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:49.292 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:49.292 [0/1] Installing files. 00:03:49.555 Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints 00:03:49.555 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 
00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:49.555 Installing 
/home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:49.555 Installing 
/home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.555 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.556 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:49.557 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 
Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.557 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.557 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:49.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:49.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:49.558 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:49.559 
Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing 
/home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.559 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:49.560 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 
00:03:49.560 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ethdev.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:49.560 Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 
Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_sched.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.560 Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 
Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:49.821 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:49.821 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:49.821 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:49.821 Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0 00:03:49.821 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-fib to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.821 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.822 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:49.823 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.084 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing 
/home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing 
/home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing 
/home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing 
/home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.085 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 
Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing 
/home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to 
/home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:50.086 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:50.086 Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25 00:03:50.086 Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:50.086 Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25 00:03:50.086 Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:50.086 Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25 00:03:50.086 Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so 00:03:50.086 Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25 00:03:50.086 Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:50.086 Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25 00:03:50.086 Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:50.086 Installing symlink pointing 
to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25 00:03:50.086 Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:50.086 Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25 00:03:50.086 Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:50.086 Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25 00:03:50.086 Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:50.086 Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25 00:03:50.086 Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:50.086 Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25 00:03:50.086 Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:50.086 Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25 00:03:50.086 Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:50.086 Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25 00:03:50.086 Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:50.086 Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25 00:03:50.086 Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:50.086 Installing symlink pointing to librte_cmdline.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25 00:03:50.086 Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:50.086 Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25 00:03:50.086 Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:50.086 Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25 00:03:50.086 Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:50.087 Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25 00:03:50.087 Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:50.087 Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25 00:03:50.087 Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:50.087 Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25 00:03:50.087 Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:50.087 Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25 00:03:50.087 Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:50.087 Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25 00:03:50.087 Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:50.087 Installing symlink pointing to librte_cfgfile.so.25.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25 00:03:50.087 Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:50.087 Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25 00:03:50.087 Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:50.087 Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25 00:03:50.087 Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:50.087 Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25 00:03:50.087 Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:50.087 Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25 00:03:50.087 Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:50.087 Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25 00:03:50.087 Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:50.087 Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25 00:03:50.087 Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:50.087 Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25 00:03:50.087 Installing symlink pointing to librte_dispatcher.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:50.087 Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25 00:03:50.087 Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:50.087 Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25 00:03:50.087 Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:50.087 Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25 00:03:50.087 Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:50.087 Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25 00:03:50.087 Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:50.087 Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25 00:03:50.087 Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:50.087 Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25 00:03:50.087 Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:50.087 Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25 00:03:50.087 './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so' 00:03:50.087 './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25' 00:03:50.087 './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0' 00:03:50.087 './librte_bus_vdev.so' -> 
'dpdk/pmds-25.0/librte_bus_vdev.so' 00:03:50.087 './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25' 00:03:50.087 './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0' 00:03:50.087 './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so' 00:03:50.087 './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25' 00:03:50.087 './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0' 00:03:50.087 './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so' 00:03:50.087 './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25' 00:03:50.087 './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0' 00:03:50.087 Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:50.087 Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25 00:03:50.087 Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:50.087 Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25 00:03:50.087 Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:50.087 Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25 00:03:50.087 Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:50.087 Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25 00:03:50.087 Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:50.087 Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25 00:03:50.087 Installing symlink pointing 
to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:50.087 Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25 00:03:50.087 Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:50.087 Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25 00:03:50.087 Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:50.087 Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25 00:03:50.087 Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:50.087 Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25 00:03:50.087 Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:50.087 Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25 00:03:50.087 Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:50.087 Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25 00:03:50.087 Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:50.087 Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25 00:03:50.087 Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:50.087 Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25 00:03:50.087 Installing symlink pointing to 
librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:50.087 Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25 00:03:50.087 Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:50.087 Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25 00:03:50.087 Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:50.087 Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25 00:03:50.087 Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:50.087 Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25 00:03:50.087 Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:50.087 Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25 00:03:50.087 Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:50.087 Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25 00:03:50.087 Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:50.087 Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25 00:03:50.088 Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:50.088 Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25 00:03:50.088 Installing symlink pointing to librte_node.so.25 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:50.088 Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25 00:03:50.088 Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so 00:03:50.088 Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25 00:03:50.088 Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so 00:03:50.088 Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25 00:03:50.088 Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so 00:03:50.088 Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25 00:03:50.088 Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so 00:03:50.088 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0' 00:03:50.088 15:12:56 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:50.088 15:12:56 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:50.088 00:03:50.088 real 0m51.966s 00:03:50.088 user 5m57.182s 00:03:50.088 sys 1m3.608s 00:03:50.088 15:12:56 build_native_dpdk -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:50.088 15:12:56 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:50.088 ************************************ 00:03:50.088 END TEST build_native_dpdk 00:03:50.088 ************************************ 00:03:50.088 15:12:56 
-- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:50.088 15:12:56 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:50.088 15:12:56 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:50.347 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:50.347 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:50.347 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:50.347 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:50.916 Using 'verbs' RDMA provider 00:04:09.960 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:24.859 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:24.859 Creating mk/config.mk...done. 00:04:24.859 Creating mk/cc.flags.mk...done. 00:04:24.859 Type 'make' to build. 
00:04:24.859 15:13:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:24.859 15:13:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:04:24.859 15:13:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:04:24.859 15:13:29 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.859 ************************************ 00:04:24.859 START TEST make 00:04:24.859 ************************************ 00:04:24.859 15:13:29 make -- common/autotest_common.sh@1127 -- $ make -j10 00:04:24.859 make[1]: Nothing to be done for 'all'. 00:05:11.560 CC lib/ut_mock/mock.o 00:05:11.560 CC lib/ut/ut.o 00:05:11.560 CC lib/log/log.o 00:05:11.560 CC lib/log/log_deprecated.o 00:05:11.560 CC lib/log/log_flags.o 00:05:11.560 LIB libspdk_ut.a 00:05:11.560 LIB libspdk_ut_mock.a 00:05:11.560 LIB libspdk_log.a 00:05:11.560 SO libspdk_ut_mock.so.6.0 00:05:11.560 SO libspdk_ut.so.2.0 00:05:11.560 SO libspdk_log.so.7.1 00:05:11.560 SYMLINK libspdk_ut_mock.so 00:05:11.560 SYMLINK libspdk_ut.so 00:05:11.560 SYMLINK libspdk_log.so 00:05:11.560 CC lib/dma/dma.o 00:05:11.560 CXX lib/trace_parser/trace.o 00:05:11.560 CC lib/util/base64.o 00:05:11.560 CC lib/util/bit_array.o 00:05:11.560 CC lib/util/crc16.o 00:05:11.560 CC lib/util/crc32.o 00:05:11.560 CC lib/util/cpuset.o 00:05:11.560 CC lib/util/crc32c.o 00:05:11.560 CC lib/ioat/ioat.o 00:05:11.560 CC lib/vfio_user/host/vfio_user_pci.o 00:05:11.560 CC lib/util/crc32_ieee.o 00:05:11.560 CC lib/util/crc64.o 00:05:11.560 CC lib/util/dif.o 00:05:11.560 CC lib/util/fd.o 00:05:11.560 CC lib/vfio_user/host/vfio_user.o 00:05:11.560 LIB libspdk_dma.a 00:05:11.560 CC lib/util/fd_group.o 00:05:11.560 SO libspdk_dma.so.5.0 00:05:11.560 CC lib/util/file.o 00:05:11.560 CC lib/util/hexlify.o 00:05:11.560 CC lib/util/iov.o 00:05:11.560 SYMLINK libspdk_dma.so 00:05:11.560 CC lib/util/math.o 00:05:11.560 LIB libspdk_ioat.a 00:05:11.560 SO libspdk_ioat.so.7.0 00:05:11.560 CC lib/util/net.o 00:05:11.560 SYMLINK libspdk_ioat.so 00:05:11.560 CC 
lib/util/pipe.o 00:05:11.560 CC lib/util/strerror_tls.o 00:05:11.560 CC lib/util/string.o 00:05:11.560 LIB libspdk_vfio_user.a 00:05:11.560 SO libspdk_vfio_user.so.5.0 00:05:11.560 CC lib/util/uuid.o 00:05:11.560 CC lib/util/xor.o 00:05:11.560 SYMLINK libspdk_vfio_user.so 00:05:11.560 CC lib/util/zipf.o 00:05:11.560 CC lib/util/md5.o 00:05:11.560 LIB libspdk_util.a 00:05:11.560 SO libspdk_util.so.10.1 00:05:11.560 LIB libspdk_trace_parser.a 00:05:11.560 SYMLINK libspdk_util.so 00:05:11.560 SO libspdk_trace_parser.so.6.0 00:05:11.560 SYMLINK libspdk_trace_parser.so 00:05:11.560 CC lib/vmd/vmd.o 00:05:11.560 CC lib/vmd/led.o 00:05:11.560 CC lib/idxd/idxd.o 00:05:11.560 CC lib/idxd/idxd_user.o 00:05:11.560 CC lib/idxd/idxd_kernel.o 00:05:11.560 CC lib/json/json_parse.o 00:05:11.560 CC lib/json/json_util.o 00:05:11.560 CC lib/rdma_utils/rdma_utils.o 00:05:11.560 CC lib/env_dpdk/env.o 00:05:11.560 CC lib/conf/conf.o 00:05:11.560 CC lib/json/json_write.o 00:05:11.560 CC lib/env_dpdk/memory.o 00:05:11.560 CC lib/env_dpdk/pci.o 00:05:11.560 CC lib/env_dpdk/init.o 00:05:11.560 CC lib/env_dpdk/threads.o 00:05:11.560 LIB libspdk_conf.a 00:05:11.560 LIB libspdk_rdma_utils.a 00:05:11.560 SO libspdk_conf.so.6.0 00:05:11.560 SO libspdk_rdma_utils.so.1.0 00:05:11.560 SYMLINK libspdk_conf.so 00:05:11.560 SYMLINK libspdk_rdma_utils.so 00:05:11.560 CC lib/env_dpdk/pci_ioat.o 00:05:11.560 CC lib/env_dpdk/pci_virtio.o 00:05:11.560 CC lib/env_dpdk/pci_vmd.o 00:05:11.560 LIB libspdk_json.a 00:05:11.560 SO libspdk_json.so.6.0 00:05:11.560 CC lib/env_dpdk/pci_idxd.o 00:05:11.560 CC lib/env_dpdk/pci_event.o 00:05:11.560 CC lib/env_dpdk/sigbus_handler.o 00:05:11.560 SYMLINK libspdk_json.so 00:05:11.560 CC lib/env_dpdk/pci_dpdk.o 00:05:11.560 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:11.560 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:11.560 CC lib/rdma_provider/common.o 00:05:11.560 LIB libspdk_idxd.a 00:05:11.560 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:11.560 SO libspdk_idxd.so.12.1 
00:05:11.560 LIB libspdk_vmd.a 00:05:11.560 CC lib/jsonrpc/jsonrpc_server.o 00:05:11.560 SO libspdk_vmd.so.6.0 00:05:11.560 SYMLINK libspdk_idxd.so 00:05:11.560 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:11.560 CC lib/jsonrpc/jsonrpc_client.o 00:05:11.560 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:11.560 SYMLINK libspdk_vmd.so 00:05:11.560 LIB libspdk_rdma_provider.a 00:05:11.560 SO libspdk_rdma_provider.so.7.0 00:05:11.560 SYMLINK libspdk_rdma_provider.so 00:05:11.560 LIB libspdk_jsonrpc.a 00:05:11.560 SO libspdk_jsonrpc.so.6.0 00:05:11.560 SYMLINK libspdk_jsonrpc.so 00:05:11.560 CC lib/rpc/rpc.o 00:05:11.560 LIB libspdk_env_dpdk.a 00:05:11.560 SO libspdk_env_dpdk.so.15.1 00:05:11.560 LIB libspdk_rpc.a 00:05:11.560 SYMLINK libspdk_env_dpdk.so 00:05:11.560 SO libspdk_rpc.so.6.0 00:05:11.560 SYMLINK libspdk_rpc.so 00:05:11.560 CC lib/trace/trace.o 00:05:11.560 CC lib/trace/trace_flags.o 00:05:11.560 CC lib/trace/trace_rpc.o 00:05:11.560 CC lib/keyring/keyring.o 00:05:11.560 CC lib/keyring/keyring_rpc.o 00:05:11.560 CC lib/notify/notify_rpc.o 00:05:11.560 CC lib/notify/notify.o 00:05:11.560 LIB libspdk_notify.a 00:05:11.560 SO libspdk_notify.so.6.0 00:05:11.560 LIB libspdk_keyring.a 00:05:11.560 SYMLINK libspdk_notify.so 00:05:11.560 LIB libspdk_trace.a 00:05:11.560 SO libspdk_keyring.so.2.0 00:05:11.560 SO libspdk_trace.so.11.0 00:05:11.560 SYMLINK libspdk_keyring.so 00:05:11.560 SYMLINK libspdk_trace.so 00:05:11.560 CC lib/sock/sock.o 00:05:11.560 CC lib/sock/sock_rpc.o 00:05:11.560 CC lib/thread/thread.o 00:05:11.560 CC lib/thread/iobuf.o 00:05:11.560 LIB libspdk_sock.a 00:05:11.560 SO libspdk_sock.so.10.0 00:05:11.560 SYMLINK libspdk_sock.so 00:05:11.560 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:11.560 CC lib/nvme/nvme_ctrlr.o 00:05:11.560 CC lib/nvme/nvme_fabric.o 00:05:11.560 CC lib/nvme/nvme_ns_cmd.o 00:05:11.560 CC lib/nvme/nvme_ns.o 00:05:11.560 CC lib/nvme/nvme_pcie_common.o 00:05:11.560 CC lib/nvme/nvme_pcie.o 00:05:11.560 CC lib/nvme/nvme.o 00:05:11.560 CC 
lib/nvme/nvme_qpair.o 00:05:12.498 CC lib/nvme/nvme_quirks.o 00:05:12.498 CC lib/nvme/nvme_transport.o 00:05:12.498 CC lib/nvme/nvme_discovery.o 00:05:12.498 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:12.498 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:12.498 LIB libspdk_thread.a 00:05:12.498 CC lib/nvme/nvme_tcp.o 00:05:12.498 SO libspdk_thread.so.11.0 00:05:12.498 CC lib/nvme/nvme_opal.o 00:05:12.498 SYMLINK libspdk_thread.so 00:05:12.498 CC lib/nvme/nvme_io_msg.o 00:05:12.757 CC lib/nvme/nvme_poll_group.o 00:05:12.757 CC lib/accel/accel.o 00:05:13.016 CC lib/accel/accel_rpc.o 00:05:13.016 CC lib/accel/accel_sw.o 00:05:13.016 CC lib/blob/blobstore.o 00:05:13.016 CC lib/nvme/nvme_zns.o 00:05:13.016 CC lib/blob/request.o 00:05:13.274 CC lib/init/json_config.o 00:05:13.274 CC lib/nvme/nvme_stubs.o 00:05:13.274 CC lib/blob/zeroes.o 00:05:13.274 CC lib/blob/blob_bs_dev.o 00:05:13.532 CC lib/init/subsystem.o 00:05:13.532 CC lib/init/subsystem_rpc.o 00:05:13.532 CC lib/init/rpc.o 00:05:13.532 CC lib/nvme/nvme_auth.o 00:05:13.532 CC lib/nvme/nvme_cuse.o 00:05:13.791 LIB libspdk_init.a 00:05:13.791 CC lib/virtio/virtio.o 00:05:13.791 SO libspdk_init.so.6.0 00:05:13.791 CC lib/virtio/virtio_vhost_user.o 00:05:13.791 SYMLINK libspdk_init.so 00:05:13.791 CC lib/nvme/nvme_rdma.o 00:05:13.791 CC lib/fsdev/fsdev.o 00:05:14.050 LIB libspdk_accel.a 00:05:14.050 CC lib/virtio/virtio_vfio_user.o 00:05:14.050 CC lib/virtio/virtio_pci.o 00:05:14.050 SO libspdk_accel.so.16.0 00:05:14.050 CC lib/fsdev/fsdev_io.o 00:05:14.050 SYMLINK libspdk_accel.so 00:05:14.050 CC lib/fsdev/fsdev_rpc.o 00:05:14.310 CC lib/event/app.o 00:05:14.310 CC lib/event/reactor.o 00:05:14.310 CC lib/event/log_rpc.o 00:05:14.310 LIB libspdk_virtio.a 00:05:14.310 SO libspdk_virtio.so.7.0 00:05:14.570 SYMLINK libspdk_virtio.so 00:05:14.570 CC lib/event/app_rpc.o 00:05:14.570 CC lib/event/scheduler_static.o 00:05:14.570 LIB libspdk_fsdev.a 00:05:14.570 SO libspdk_fsdev.so.2.0 00:05:14.570 CC lib/bdev/bdev.o 00:05:14.570 CC 
lib/bdev/bdev_rpc.o 00:05:14.570 CC lib/bdev/bdev_zone.o 00:05:14.570 CC lib/bdev/part.o 00:05:14.570 SYMLINK libspdk_fsdev.so 00:05:14.829 CC lib/bdev/scsi_nvme.o 00:05:14.829 LIB libspdk_event.a 00:05:14.829 SO libspdk_event.so.14.0 00:05:14.829 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:14.829 SYMLINK libspdk_event.so 00:05:15.398 LIB libspdk_nvme.a 00:05:15.398 SO libspdk_nvme.so.15.0 00:05:15.398 LIB libspdk_fuse_dispatcher.a 00:05:15.398 SO libspdk_fuse_dispatcher.so.1.0 00:05:15.663 SYMLINK libspdk_fuse_dispatcher.so 00:05:15.663 SYMLINK libspdk_nvme.so 00:05:16.608 LIB libspdk_blob.a 00:05:16.608 SO libspdk_blob.so.11.0 00:05:16.867 SYMLINK libspdk_blob.so 00:05:17.126 CC lib/lvol/lvol.o 00:05:17.126 CC lib/blobfs/blobfs.o 00:05:17.126 CC lib/blobfs/tree.o 00:05:17.386 LIB libspdk_bdev.a 00:05:17.646 SO libspdk_bdev.so.17.0 00:05:17.646 SYMLINK libspdk_bdev.so 00:05:17.906 CC lib/scsi/dev.o 00:05:17.906 CC lib/nvmf/ctrlr.o 00:05:17.906 CC lib/scsi/lun.o 00:05:17.906 CC lib/scsi/scsi.o 00:05:17.906 CC lib/scsi/port.o 00:05:17.906 CC lib/ublk/ublk.o 00:05:17.906 CC lib/ftl/ftl_core.o 00:05:17.906 CC lib/nbd/nbd.o 00:05:18.165 CC lib/scsi/scsi_bdev.o 00:05:18.165 CC lib/ublk/ublk_rpc.o 00:05:18.165 CC lib/nvmf/ctrlr_discovery.o 00:05:18.165 LIB libspdk_blobfs.a 00:05:18.165 CC lib/nvmf/ctrlr_bdev.o 00:05:18.165 SO libspdk_blobfs.so.10.0 00:05:18.165 CC lib/scsi/scsi_pr.o 00:05:18.424 SYMLINK libspdk_blobfs.so 00:05:18.424 CC lib/scsi/scsi_rpc.o 00:05:18.424 LIB libspdk_lvol.a 00:05:18.424 SO libspdk_lvol.so.10.0 00:05:18.424 CC lib/ftl/ftl_init.o 00:05:18.424 CC lib/nbd/nbd_rpc.o 00:05:18.424 SYMLINK libspdk_lvol.so 00:05:18.424 CC lib/scsi/task.o 00:05:18.424 CC lib/nvmf/subsystem.o 00:05:18.424 CC lib/ftl/ftl_layout.o 00:05:18.683 LIB libspdk_nbd.a 00:05:18.683 SO libspdk_nbd.so.7.0 00:05:18.683 LIB libspdk_ublk.a 00:05:18.683 CC lib/ftl/ftl_debug.o 00:05:18.683 CC lib/ftl/ftl_io.o 00:05:18.683 LIB libspdk_scsi.a 00:05:18.683 SO libspdk_ublk.so.3.0 
00:05:18.683 SYMLINK libspdk_nbd.so 00:05:18.683 CC lib/nvmf/nvmf.o 00:05:18.683 CC lib/nvmf/nvmf_rpc.o 00:05:18.683 SO libspdk_scsi.so.9.0 00:05:18.683 SYMLINK libspdk_ublk.so 00:05:18.683 CC lib/nvmf/transport.o 00:05:18.683 SYMLINK libspdk_scsi.so 00:05:18.683 CC lib/nvmf/tcp.o 00:05:18.942 CC lib/ftl/ftl_sb.o 00:05:18.942 CC lib/ftl/ftl_l2p.o 00:05:18.942 CC lib/nvmf/stubs.o 00:05:18.942 CC lib/iscsi/conn.o 00:05:19.202 CC lib/iscsi/init_grp.o 00:05:19.202 CC lib/ftl/ftl_l2p_flat.o 00:05:19.460 CC lib/ftl/ftl_nv_cache.o 00:05:19.460 CC lib/ftl/ftl_band.o 00:05:19.460 CC lib/nvmf/mdns_server.o 00:05:19.460 CC lib/nvmf/rdma.o 00:05:19.719 CC lib/iscsi/iscsi.o 00:05:19.719 CC lib/iscsi/param.o 00:05:19.719 CC lib/iscsi/portal_grp.o 00:05:19.719 CC lib/ftl/ftl_band_ops.o 00:05:19.978 CC lib/nvmf/auth.o 00:05:19.978 CC lib/iscsi/tgt_node.o 00:05:19.978 CC lib/ftl/ftl_writer.o 00:05:19.978 CC lib/iscsi/iscsi_subsystem.o 00:05:19.978 CC lib/iscsi/iscsi_rpc.o 00:05:20.237 CC lib/ftl/ftl_rq.o 00:05:20.237 CC lib/ftl/ftl_reloc.o 00:05:20.496 CC lib/vhost/vhost.o 00:05:20.496 CC lib/iscsi/task.o 00:05:20.496 CC lib/vhost/vhost_rpc.o 00:05:20.496 CC lib/vhost/vhost_scsi.o 00:05:20.496 CC lib/vhost/vhost_blk.o 00:05:20.756 CC lib/vhost/rte_vhost_user.o 00:05:20.756 CC lib/ftl/ftl_l2p_cache.o 00:05:20.756 CC lib/ftl/ftl_p2l.o 00:05:20.756 CC lib/ftl/ftl_p2l_log.o 00:05:21.014 CC lib/ftl/mngt/ftl_mngt.o 00:05:21.014 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:21.274 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:21.274 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:21.274 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:21.274 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:21.274 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:21.535 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:21.535 LIB libspdk_iscsi.a 00:05:21.535 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:21.535 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:21.535 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:21.535 SO libspdk_iscsi.so.8.0 00:05:21.535 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:05:21.795 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:21.795 CC lib/ftl/utils/ftl_conf.o 00:05:21.795 CC lib/ftl/utils/ftl_md.o 00:05:21.795 SYMLINK libspdk_iscsi.so 00:05:21.795 CC lib/ftl/utils/ftl_mempool.o 00:05:21.795 CC lib/ftl/utils/ftl_bitmap.o 00:05:21.795 CC lib/ftl/utils/ftl_property.o 00:05:21.795 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:22.055 LIB libspdk_vhost.a 00:05:22.055 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:22.055 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:22.055 SO libspdk_vhost.so.8.0 00:05:22.055 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:22.055 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:22.055 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:22.055 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:22.055 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:22.055 SYMLINK libspdk_vhost.so 00:05:22.055 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:22.315 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:22.315 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:22.315 LIB libspdk_nvmf.a 00:05:22.315 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:22.315 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:22.315 CC lib/ftl/base/ftl_base_dev.o 00:05:22.315 CC lib/ftl/base/ftl_base_bdev.o 00:05:22.315 CC lib/ftl/ftl_trace.o 00:05:22.315 SO libspdk_nvmf.so.20.0 00:05:22.576 LIB libspdk_ftl.a 00:05:22.576 SYMLINK libspdk_nvmf.so 00:05:22.835 SO libspdk_ftl.so.9.0 00:05:23.095 SYMLINK libspdk_ftl.so 00:05:23.359 CC module/env_dpdk/env_dpdk_rpc.o 00:05:23.622 CC module/fsdev/aio/fsdev_aio.o 00:05:23.622 CC module/keyring/linux/keyring.o 00:05:23.622 CC module/accel/dsa/accel_dsa.o 00:05:23.622 CC module/blob/bdev/blob_bdev.o 00:05:23.622 CC module/accel/error/accel_error.o 00:05:23.622 CC module/keyring/file/keyring.o 00:05:23.622 CC module/sock/posix/posix.o 00:05:23.622 CC module/accel/ioat/accel_ioat.o 00:05:23.622 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:23.622 LIB libspdk_env_dpdk_rpc.a 00:05:23.622 SO libspdk_env_dpdk_rpc.so.6.0 00:05:23.622 CC module/keyring/linux/keyring_rpc.o 
00:05:23.622 CC module/keyring/file/keyring_rpc.o 00:05:23.622 SYMLINK libspdk_env_dpdk_rpc.so 00:05:23.622 CC module/accel/ioat/accel_ioat_rpc.o 00:05:23.881 CC module/accel/error/accel_error_rpc.o 00:05:23.881 LIB libspdk_scheduler_dynamic.a 00:05:23.881 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:23.881 SO libspdk_scheduler_dynamic.so.4.0 00:05:23.881 LIB libspdk_keyring_linux.a 00:05:23.881 SO libspdk_keyring_linux.so.1.0 00:05:23.881 LIB libspdk_keyring_file.a 00:05:23.881 SYMLINK libspdk_scheduler_dynamic.so 00:05:23.881 CC module/accel/dsa/accel_dsa_rpc.o 00:05:23.881 LIB libspdk_accel_ioat.a 00:05:23.881 SO libspdk_keyring_file.so.2.0 00:05:23.881 SO libspdk_accel_ioat.so.6.0 00:05:23.881 LIB libspdk_blob_bdev.a 00:05:23.881 LIB libspdk_accel_error.a 00:05:23.881 SO libspdk_blob_bdev.so.11.0 00:05:23.881 SYMLINK libspdk_keyring_linux.so 00:05:23.881 SO libspdk_accel_error.so.2.0 00:05:23.881 SYMLINK libspdk_accel_ioat.so 00:05:23.881 SYMLINK libspdk_keyring_file.so 00:05:23.881 CC module/fsdev/aio/linux_aio_mgr.o 00:05:23.881 SYMLINK libspdk_blob_bdev.so 00:05:23.881 SYMLINK libspdk_accel_error.so 00:05:23.881 LIB libspdk_accel_dsa.a 00:05:24.141 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:24.141 SO libspdk_accel_dsa.so.5.0 00:05:24.141 CC module/scheduler/gscheduler/gscheduler.o 00:05:24.141 SYMLINK libspdk_accel_dsa.so 00:05:24.141 CC module/accel/iaa/accel_iaa.o 00:05:24.141 LIB libspdk_scheduler_dpdk_governor.a 00:05:24.141 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:24.141 CC module/bdev/delay/vbdev_delay.o 00:05:24.399 CC module/bdev/error/vbdev_error.o 00:05:24.399 LIB libspdk_scheduler_gscheduler.a 00:05:24.399 CC module/blobfs/bdev/blobfs_bdev.o 00:05:24.399 CC module/bdev/gpt/gpt.o 00:05:24.399 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:24.399 CC module/bdev/error/vbdev_error_rpc.o 00:05:24.399 SO libspdk_scheduler_gscheduler.so.4.0 00:05:24.399 LIB libspdk_fsdev_aio.a 00:05:24.399 CC module/bdev/lvol/vbdev_lvol.o 00:05:24.399 
CC module/accel/iaa/accel_iaa_rpc.o 00:05:24.399 SYMLINK libspdk_scheduler_gscheduler.so 00:05:24.399 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:24.399 SO libspdk_fsdev_aio.so.1.0 00:05:24.399 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:24.399 SYMLINK libspdk_fsdev_aio.so 00:05:24.400 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:24.400 CC module/bdev/gpt/vbdev_gpt.o 00:05:24.400 LIB libspdk_accel_iaa.a 00:05:24.659 LIB libspdk_sock_posix.a 00:05:24.659 SO libspdk_accel_iaa.so.3.0 00:05:24.659 LIB libspdk_blobfs_bdev.a 00:05:24.659 SO libspdk_sock_posix.so.6.0 00:05:24.659 LIB libspdk_bdev_error.a 00:05:24.659 SO libspdk_blobfs_bdev.so.6.0 00:05:24.659 SYMLINK libspdk_accel_iaa.so 00:05:24.659 SO libspdk_bdev_error.so.6.0 00:05:24.659 SYMLINK libspdk_blobfs_bdev.so 00:05:24.659 LIB libspdk_bdev_delay.a 00:05:24.659 SYMLINK libspdk_sock_posix.so 00:05:24.659 SYMLINK libspdk_bdev_error.so 00:05:24.659 CC module/bdev/malloc/bdev_malloc.o 00:05:24.659 SO libspdk_bdev_delay.so.6.0 00:05:24.918 SYMLINK libspdk_bdev_delay.so 00:05:24.918 LIB libspdk_bdev_gpt.a 00:05:24.918 CC module/bdev/null/bdev_null.o 00:05:24.918 SO libspdk_bdev_gpt.so.6.0 00:05:24.918 CC module/bdev/nvme/bdev_nvme.o 00:05:24.918 CC module/bdev/passthru/vbdev_passthru.o 00:05:24.918 CC module/bdev/split/vbdev_split.o 00:05:24.918 CC module/bdev/raid/bdev_raid.o 00:05:24.918 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:24.918 SYMLINK libspdk_bdev_gpt.so 00:05:24.918 CC module/bdev/raid/bdev_raid_rpc.o 00:05:24.918 LIB libspdk_bdev_lvol.a 00:05:24.918 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:24.919 SO libspdk_bdev_lvol.so.6.0 00:05:25.177 SYMLINK libspdk_bdev_lvol.so 00:05:25.177 CC module/bdev/raid/bdev_raid_sb.o 00:05:25.177 CC module/bdev/split/vbdev_split_rpc.o 00:05:25.177 CC module/bdev/null/bdev_null_rpc.o 00:05:25.177 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:25.177 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:25.177 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:25.177 
LIB libspdk_bdev_split.a 00:05:25.177 SO libspdk_bdev_split.so.6.0 00:05:25.438 LIB libspdk_bdev_null.a 00:05:25.438 LIB libspdk_bdev_malloc.a 00:05:25.438 CC module/bdev/nvme/nvme_rpc.o 00:05:25.438 LIB libspdk_bdev_zone_block.a 00:05:25.438 SO libspdk_bdev_null.so.6.0 00:05:25.438 SO libspdk_bdev_malloc.so.6.0 00:05:25.438 LIB libspdk_bdev_passthru.a 00:05:25.438 SO libspdk_bdev_zone_block.so.6.0 00:05:25.438 SYMLINK libspdk_bdev_split.so 00:05:25.438 SO libspdk_bdev_passthru.so.6.0 00:05:25.438 CC module/bdev/nvme/bdev_mdns_client.o 00:05:25.438 SYMLINK libspdk_bdev_null.so 00:05:25.438 SYMLINK libspdk_bdev_malloc.so 00:05:25.438 CC module/bdev/nvme/vbdev_opal.o 00:05:25.438 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:25.438 SYMLINK libspdk_bdev_zone_block.so 00:05:25.438 SYMLINK libspdk_bdev_passthru.so 00:05:25.698 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:25.698 CC module/bdev/aio/bdev_aio.o 00:05:25.698 CC module/bdev/ftl/bdev_ftl.o 00:05:25.698 CC module/bdev/aio/bdev_aio_rpc.o 00:05:25.698 CC module/bdev/iscsi/bdev_iscsi.o 00:05:25.698 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:25.698 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:25.698 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:25.698 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:25.958 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:25.958 CC module/bdev/raid/raid0.o 00:05:25.958 CC module/bdev/raid/raid1.o 00:05:25.958 LIB libspdk_bdev_ftl.a 00:05:25.958 LIB libspdk_bdev_aio.a 00:05:25.958 SO libspdk_bdev_ftl.so.6.0 00:05:25.958 SO libspdk_bdev_aio.so.6.0 00:05:25.958 CC module/bdev/raid/concat.o 00:05:25.958 SYMLINK libspdk_bdev_ftl.so 00:05:25.958 SYMLINK libspdk_bdev_aio.so 00:05:25.958 CC module/bdev/raid/raid5f.o 00:05:26.218 LIB libspdk_bdev_iscsi.a 00:05:26.218 SO libspdk_bdev_iscsi.so.6.0 00:05:26.218 SYMLINK libspdk_bdev_iscsi.so 00:05:26.479 LIB libspdk_bdev_virtio.a 00:05:26.479 SO libspdk_bdev_virtio.so.6.0 00:05:26.479 SYMLINK libspdk_bdev_virtio.so 00:05:26.739 LIB libspdk_bdev_raid.a 
00:05:26.739 SO libspdk_bdev_raid.so.6.0 00:05:26.999 SYMLINK libspdk_bdev_raid.so 00:05:27.938 LIB libspdk_bdev_nvme.a 00:05:27.938 SO libspdk_bdev_nvme.so.7.1 00:05:27.938 SYMLINK libspdk_bdev_nvme.so 00:05:28.878 CC module/event/subsystems/keyring/keyring.o 00:05:28.878 CC module/event/subsystems/iobuf/iobuf.o 00:05:28.878 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:28.878 CC module/event/subsystems/fsdev/fsdev.o 00:05:28.878 CC module/event/subsystems/scheduler/scheduler.o 00:05:28.878 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:28.878 CC module/event/subsystems/vmd/vmd.o 00:05:28.878 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:28.878 CC module/event/subsystems/sock/sock.o 00:05:28.878 LIB libspdk_event_keyring.a 00:05:28.878 LIB libspdk_event_vmd.a 00:05:28.878 LIB libspdk_event_fsdev.a 00:05:28.878 LIB libspdk_event_scheduler.a 00:05:28.878 LIB libspdk_event_vhost_blk.a 00:05:28.878 SO libspdk_event_keyring.so.1.0 00:05:28.878 LIB libspdk_event_iobuf.a 00:05:28.878 SO libspdk_event_vmd.so.6.0 00:05:28.878 SO libspdk_event_fsdev.so.1.0 00:05:28.878 SO libspdk_event_scheduler.so.4.0 00:05:28.878 SO libspdk_event_vhost_blk.so.3.0 00:05:28.878 LIB libspdk_event_sock.a 00:05:28.878 SO libspdk_event_iobuf.so.3.0 00:05:28.878 SYMLINK libspdk_event_keyring.so 00:05:28.878 SO libspdk_event_sock.so.5.0 00:05:28.878 SYMLINK libspdk_event_fsdev.so 00:05:28.878 SYMLINK libspdk_event_vmd.so 00:05:28.878 SYMLINK libspdk_event_vhost_blk.so 00:05:28.878 SYMLINK libspdk_event_scheduler.so 00:05:28.878 SYMLINK libspdk_event_iobuf.so 00:05:28.878 SYMLINK libspdk_event_sock.so 00:05:29.448 CC module/event/subsystems/accel/accel.o 00:05:29.448 LIB libspdk_event_accel.a 00:05:29.448 SO libspdk_event_accel.so.6.0 00:05:29.708 SYMLINK libspdk_event_accel.so 00:05:29.969 CC module/event/subsystems/bdev/bdev.o 00:05:30.229 LIB libspdk_event_bdev.a 00:05:30.229 SO libspdk_event_bdev.so.6.0 00:05:30.229 SYMLINK libspdk_event_bdev.so 00:05:30.797 CC 
module/event/subsystems/scsi/scsi.o 00:05:30.797 CC module/event/subsystems/ublk/ublk.o 00:05:30.797 CC module/event/subsystems/nbd/nbd.o 00:05:30.797 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:30.797 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:30.797 LIB libspdk_event_ublk.a 00:05:30.797 LIB libspdk_event_nbd.a 00:05:30.797 SO libspdk_event_ublk.so.3.0 00:05:30.797 LIB libspdk_event_scsi.a 00:05:30.797 SO libspdk_event_nbd.so.6.0 00:05:30.797 SO libspdk_event_scsi.so.6.0 00:05:30.797 SYMLINK libspdk_event_ublk.so 00:05:30.797 SYMLINK libspdk_event_nbd.so 00:05:30.797 LIB libspdk_event_nvmf.a 00:05:30.797 SYMLINK libspdk_event_scsi.so 00:05:31.057 SO libspdk_event_nvmf.so.6.0 00:05:31.057 SYMLINK libspdk_event_nvmf.so 00:05:31.344 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:31.344 CC module/event/subsystems/iscsi/iscsi.o 00:05:31.344 LIB libspdk_event_vhost_scsi.a 00:05:31.622 SO libspdk_event_vhost_scsi.so.3.0 00:05:31.622 LIB libspdk_event_iscsi.a 00:05:31.622 SO libspdk_event_iscsi.so.6.0 00:05:31.622 SYMLINK libspdk_event_vhost_scsi.so 00:05:31.622 SYMLINK libspdk_event_iscsi.so 00:05:31.882 SO libspdk.so.6.0 00:05:31.882 SYMLINK libspdk.so 00:05:32.143 CXX app/trace/trace.o 00:05:32.143 CC app/spdk_lspci/spdk_lspci.o 00:05:32.143 CC app/trace_record/trace_record.o 00:05:32.143 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:32.143 CC app/nvmf_tgt/nvmf_main.o 00:05:32.143 CC app/iscsi_tgt/iscsi_tgt.o 00:05:32.143 CC app/spdk_tgt/spdk_tgt.o 00:05:32.143 CC examples/ioat/perf/perf.o 00:05:32.143 CC test/thread/poller_perf/poller_perf.o 00:05:32.403 CC examples/util/zipf/zipf.o 00:05:32.403 LINK spdk_lspci 00:05:32.403 LINK nvmf_tgt 00:05:32.403 LINK interrupt_tgt 00:05:32.403 LINK poller_perf 00:05:32.403 LINK spdk_trace_record 00:05:32.403 LINK iscsi_tgt 00:05:32.403 LINK spdk_tgt 00:05:32.403 LINK zipf 00:05:32.403 LINK ioat_perf 00:05:32.663 CC app/spdk_nvme_perf/perf.o 00:05:32.663 LINK spdk_trace 00:05:32.663 CC 
app/spdk_nvme_identify/identify.o 00:05:32.663 TEST_HEADER include/spdk/accel.h 00:05:32.663 TEST_HEADER include/spdk/accel_module.h 00:05:32.663 TEST_HEADER include/spdk/assert.h 00:05:32.663 TEST_HEADER include/spdk/barrier.h 00:05:32.663 TEST_HEADER include/spdk/base64.h 00:05:32.663 TEST_HEADER include/spdk/bdev.h 00:05:32.663 TEST_HEADER include/spdk/bdev_module.h 00:05:32.663 TEST_HEADER include/spdk/bdev_zone.h 00:05:32.663 TEST_HEADER include/spdk/bit_array.h 00:05:32.663 TEST_HEADER include/spdk/bit_pool.h 00:05:32.663 TEST_HEADER include/spdk/blob_bdev.h 00:05:32.663 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:32.663 TEST_HEADER include/spdk/blobfs.h 00:05:32.663 TEST_HEADER include/spdk/blob.h 00:05:32.663 TEST_HEADER include/spdk/conf.h 00:05:32.663 TEST_HEADER include/spdk/config.h 00:05:32.663 CC app/spdk_nvme_discover/discovery_aer.o 00:05:32.663 TEST_HEADER include/spdk/cpuset.h 00:05:32.663 TEST_HEADER include/spdk/crc16.h 00:05:32.663 TEST_HEADER include/spdk/crc32.h 00:05:32.663 TEST_HEADER include/spdk/crc64.h 00:05:32.663 TEST_HEADER include/spdk/dif.h 00:05:32.663 TEST_HEADER include/spdk/dma.h 00:05:32.663 TEST_HEADER include/spdk/endian.h 00:05:32.663 TEST_HEADER include/spdk/env_dpdk.h 00:05:32.663 TEST_HEADER include/spdk/env.h 00:05:32.663 TEST_HEADER include/spdk/event.h 00:05:32.663 TEST_HEADER include/spdk/fd_group.h 00:05:32.663 TEST_HEADER include/spdk/fd.h 00:05:32.922 TEST_HEADER include/spdk/file.h 00:05:32.922 TEST_HEADER include/spdk/fsdev.h 00:05:32.922 TEST_HEADER include/spdk/fsdev_module.h 00:05:32.922 CC examples/ioat/verify/verify.o 00:05:32.922 TEST_HEADER include/spdk/ftl.h 00:05:32.922 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:32.922 TEST_HEADER include/spdk/gpt_spec.h 00:05:32.922 TEST_HEADER include/spdk/hexlify.h 00:05:32.922 TEST_HEADER include/spdk/histogram_data.h 00:05:32.922 TEST_HEADER include/spdk/idxd.h 00:05:32.922 CC app/spdk_top/spdk_top.o 00:05:32.922 TEST_HEADER include/spdk/idxd_spec.h 
00:05:32.922 TEST_HEADER include/spdk/init.h 00:05:32.922 TEST_HEADER include/spdk/ioat.h 00:05:32.922 TEST_HEADER include/spdk/ioat_spec.h 00:05:32.922 CC test/dma/test_dma/test_dma.o 00:05:32.922 TEST_HEADER include/spdk/iscsi_spec.h 00:05:32.922 TEST_HEADER include/spdk/json.h 00:05:32.922 TEST_HEADER include/spdk/jsonrpc.h 00:05:32.922 TEST_HEADER include/spdk/keyring.h 00:05:32.923 CC test/app/bdev_svc/bdev_svc.o 00:05:32.923 TEST_HEADER include/spdk/keyring_module.h 00:05:32.923 TEST_HEADER include/spdk/likely.h 00:05:32.923 TEST_HEADER include/spdk/log.h 00:05:32.923 TEST_HEADER include/spdk/lvol.h 00:05:32.923 TEST_HEADER include/spdk/md5.h 00:05:32.923 TEST_HEADER include/spdk/memory.h 00:05:32.923 TEST_HEADER include/spdk/mmio.h 00:05:32.923 TEST_HEADER include/spdk/nbd.h 00:05:32.923 TEST_HEADER include/spdk/net.h 00:05:32.923 TEST_HEADER include/spdk/notify.h 00:05:32.923 TEST_HEADER include/spdk/nvme.h 00:05:32.923 TEST_HEADER include/spdk/nvme_intel.h 00:05:32.923 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:32.923 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:32.923 TEST_HEADER include/spdk/nvme_spec.h 00:05:32.923 TEST_HEADER include/spdk/nvme_zns.h 00:05:32.923 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:32.923 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:32.923 TEST_HEADER include/spdk/nvmf.h 00:05:32.923 TEST_HEADER include/spdk/nvmf_spec.h 00:05:32.923 TEST_HEADER include/spdk/nvmf_transport.h 00:05:32.923 TEST_HEADER include/spdk/opal.h 00:05:32.923 TEST_HEADER include/spdk/opal_spec.h 00:05:32.923 TEST_HEADER include/spdk/pci_ids.h 00:05:32.923 TEST_HEADER include/spdk/pipe.h 00:05:32.923 TEST_HEADER include/spdk/queue.h 00:05:32.923 TEST_HEADER include/spdk/reduce.h 00:05:32.923 TEST_HEADER include/spdk/rpc.h 00:05:32.923 TEST_HEADER include/spdk/scheduler.h 00:05:32.923 TEST_HEADER include/spdk/scsi.h 00:05:32.923 TEST_HEADER include/spdk/scsi_spec.h 00:05:32.923 TEST_HEADER include/spdk/sock.h 00:05:32.923 TEST_HEADER include/spdk/stdinc.h 
00:05:32.923 TEST_HEADER include/spdk/string.h 00:05:32.923 TEST_HEADER include/spdk/thread.h 00:05:32.923 TEST_HEADER include/spdk/trace.h 00:05:32.923 TEST_HEADER include/spdk/trace_parser.h 00:05:32.923 TEST_HEADER include/spdk/tree.h 00:05:32.923 TEST_HEADER include/spdk/ublk.h 00:05:32.923 TEST_HEADER include/spdk/util.h 00:05:32.923 TEST_HEADER include/spdk/uuid.h 00:05:32.923 TEST_HEADER include/spdk/version.h 00:05:32.923 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:32.923 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:32.923 TEST_HEADER include/spdk/vhost.h 00:05:32.923 TEST_HEADER include/spdk/vmd.h 00:05:32.923 CC app/spdk_dd/spdk_dd.o 00:05:32.923 TEST_HEADER include/spdk/xor.h 00:05:32.923 TEST_HEADER include/spdk/zipf.h 00:05:32.923 CXX test/cpp_headers/accel.o 00:05:32.923 CC test/env/mem_callbacks/mem_callbacks.o 00:05:32.923 LINK spdk_nvme_discover 00:05:32.923 LINK bdev_svc 00:05:32.923 LINK verify 00:05:33.183 CXX test/cpp_headers/accel_module.o 00:05:33.183 CXX test/cpp_headers/assert.o 00:05:33.441 LINK spdk_dd 00:05:33.441 CC examples/thread/thread/thread_ex.o 00:05:33.441 LINK test_dma 00:05:33.441 CC examples/sock/hello_world/hello_sock.o 00:05:33.441 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:33.441 CXX test/cpp_headers/barrier.o 00:05:33.441 LINK spdk_nvme_perf 00:05:33.441 LINK mem_callbacks 00:05:33.441 CXX test/cpp_headers/base64.o 00:05:33.699 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:33.699 LINK thread 00:05:33.699 LINK hello_sock 00:05:33.699 CXX test/cpp_headers/bdev.o 00:05:33.699 CC test/env/vtophys/vtophys.o 00:05:33.699 CC app/fio/nvme/fio_plugin.o 00:05:33.959 CC app/fio/bdev/fio_plugin.o 00:05:33.959 LINK nvme_fuzz 00:05:33.959 LINK spdk_nvme_identify 00:05:33.959 CXX test/cpp_headers/bdev_module.o 00:05:33.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:33.959 LINK spdk_top 00:05:33.959 LINK vtophys 00:05:33.959 CXX test/cpp_headers/bdev_zone.o 00:05:33.959 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 
00:05:34.217 CC examples/vmd/lsvmd/lsvmd.o 00:05:34.217 CXX test/cpp_headers/bit_array.o 00:05:34.217 CC test/app/histogram_perf/histogram_perf.o 00:05:34.217 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:34.217 CXX test/cpp_headers/bit_pool.o 00:05:34.217 LINK lsvmd 00:05:34.217 CXX test/cpp_headers/blob_bdev.o 00:05:34.217 CC examples/idxd/perf/perf.o 00:05:34.475 LINK histogram_perf 00:05:34.476 LINK env_dpdk_post_init 00:05:34.476 LINK spdk_bdev 00:05:34.476 LINK spdk_nvme 00:05:34.476 CXX test/cpp_headers/blobfs_bdev.o 00:05:34.476 CC examples/vmd/led/led.o 00:05:34.476 CC app/vhost/vhost.o 00:05:34.476 LINK vhost_fuzz 00:05:34.734 CC test/env/memory/memory_ut.o 00:05:34.734 CC test/env/pci/pci_ut.o 00:05:34.734 CC test/app/jsoncat/jsoncat.o 00:05:34.734 LINK led 00:05:34.734 CXX test/cpp_headers/blobfs.o 00:05:34.734 CC test/app/stub/stub.o 00:05:34.734 LINK idxd_perf 00:05:34.734 LINK vhost 00:05:34.734 CXX test/cpp_headers/blob.o 00:05:34.734 LINK jsoncat 00:05:34.992 CXX test/cpp_headers/conf.o 00:05:34.992 LINK stub 00:05:34.992 CXX test/cpp_headers/config.o 00:05:34.992 CXX test/cpp_headers/cpuset.o 00:05:34.992 CXX test/cpp_headers/crc16.o 00:05:34.992 CC test/event/event_perf/event_perf.o 00:05:34.992 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:35.252 CC test/nvme/aer/aer.o 00:05:35.252 LINK pci_ut 00:05:35.252 CC test/nvme/reset/reset.o 00:05:35.252 CC examples/accel/perf/accel_perf.o 00:05:35.252 LINK event_perf 00:05:35.252 CC test/event/reactor/reactor.o 00:05:35.252 CXX test/cpp_headers/crc32.o 00:05:35.510 LINK reactor 00:05:35.510 LINK hello_fsdev 00:05:35.510 CXX test/cpp_headers/crc64.o 00:05:35.510 CXX test/cpp_headers/dif.o 00:05:35.510 LINK reset 00:05:35.510 LINK aer 00:05:35.510 CXX test/cpp_headers/dma.o 00:05:35.510 LINK iscsi_fuzz 00:05:35.768 CC test/event/reactor_perf/reactor_perf.o 00:05:35.768 CC test/rpc_client/rpc_client_test.o 00:05:35.768 CC test/nvme/sgl/sgl.o 00:05:35.768 CC 
test/event/app_repeat/app_repeat.o 00:05:35.768 CXX test/cpp_headers/endian.o 00:05:35.768 LINK reactor_perf 00:05:35.768 LINK accel_perf 00:05:35.768 CXX test/cpp_headers/env_dpdk.o 00:05:36.026 CC test/accel/dif/dif.o 00:05:36.026 LINK rpc_client_test 00:05:36.026 LINK app_repeat 00:05:36.026 CC test/blobfs/mkfs/mkfs.o 00:05:36.026 LINK memory_ut 00:05:36.026 CXX test/cpp_headers/env.o 00:05:36.026 LINK sgl 00:05:36.026 CC examples/blob/hello_world/hello_blob.o 00:05:36.026 CC examples/blob/cli/blobcli.o 00:05:36.026 CXX test/cpp_headers/event.o 00:05:36.026 LINK mkfs 00:05:36.285 CXX test/cpp_headers/fd_group.o 00:05:36.285 CC test/event/scheduler/scheduler.o 00:05:36.285 CC test/lvol/esnap/esnap.o 00:05:36.285 LINK hello_blob 00:05:36.285 CC test/nvme/e2edp/nvme_dp.o 00:05:36.285 CC examples/nvme/hello_world/hello_world.o 00:05:36.285 CXX test/cpp_headers/fd.o 00:05:36.285 CC examples/nvme/reconnect/reconnect.o 00:05:36.543 LINK scheduler 00:05:36.543 CXX test/cpp_headers/file.o 00:05:36.543 CC examples/bdev/hello_world/hello_bdev.o 00:05:36.543 LINK hello_world 00:05:36.543 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:36.543 LINK blobcli 00:05:36.543 LINK nvme_dp 00:05:36.802 CXX test/cpp_headers/fsdev.o 00:05:36.802 CXX test/cpp_headers/fsdev_module.o 00:05:36.802 LINK dif 00:05:36.802 LINK reconnect 00:05:36.802 CXX test/cpp_headers/ftl.o 00:05:36.802 LINK hello_bdev 00:05:36.802 CC test/nvme/overhead/overhead.o 00:05:36.802 CC test/nvme/err_injection/err_injection.o 00:05:36.802 CC examples/nvme/arbitration/arbitration.o 00:05:37.061 CC examples/bdev/bdevperf/bdevperf.o 00:05:37.061 CC test/nvme/startup/startup.o 00:05:37.061 CXX test/cpp_headers/fuse_dispatcher.o 00:05:37.061 CC test/nvme/reserve/reserve.o 00:05:37.061 LINK err_injection 00:05:37.320 LINK overhead 00:05:37.320 LINK nvme_manage 00:05:37.320 CXX test/cpp_headers/gpt_spec.o 00:05:37.320 LINK startup 00:05:37.320 LINK arbitration 00:05:37.320 LINK reserve 00:05:37.320 CC 
test/bdev/bdevio/bdevio.o 00:05:37.320 CC test/nvme/simple_copy/simple_copy.o 00:05:37.320 CXX test/cpp_headers/hexlify.o 00:05:37.579 CC test/nvme/connect_stress/connect_stress.o 00:05:37.579 CC examples/nvme/hotplug/hotplug.o 00:05:37.579 CC test/nvme/boot_partition/boot_partition.o 00:05:37.579 CXX test/cpp_headers/histogram_data.o 00:05:37.579 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:37.579 CC test/nvme/compliance/nvme_compliance.o 00:05:37.579 LINK simple_copy 00:05:37.579 LINK connect_stress 00:05:37.838 CXX test/cpp_headers/idxd.o 00:05:37.838 LINK boot_partition 00:05:37.838 LINK hotplug 00:05:37.838 LINK cmb_copy 00:05:37.838 LINK bdevio 00:05:37.838 CC test/nvme/fused_ordering/fused_ordering.o 00:05:37.838 CXX test/cpp_headers/idxd_spec.o 00:05:37.838 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:38.096 CXX test/cpp_headers/init.o 00:05:38.096 LINK nvme_compliance 00:05:38.096 CC test/nvme/fdp/fdp.o 00:05:38.096 CC examples/nvme/abort/abort.o 00:05:38.096 LINK bdevperf 00:05:38.096 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:38.096 LINK fused_ordering 00:05:38.096 LINK doorbell_aers 00:05:38.096 CXX test/cpp_headers/ioat.o 00:05:38.096 CC test/nvme/cuse/cuse.o 00:05:38.096 CXX test/cpp_headers/ioat_spec.o 00:05:38.355 LINK pmr_persistence 00:05:38.355 CXX test/cpp_headers/iscsi_spec.o 00:05:38.355 CXX test/cpp_headers/json.o 00:05:38.355 CXX test/cpp_headers/jsonrpc.o 00:05:38.355 CXX test/cpp_headers/keyring.o 00:05:38.355 CXX test/cpp_headers/keyring_module.o 00:05:38.355 LINK fdp 00:05:38.355 CXX test/cpp_headers/likely.o 00:05:38.355 LINK abort 00:05:38.355 CXX test/cpp_headers/log.o 00:05:38.355 CXX test/cpp_headers/lvol.o 00:05:38.614 CXX test/cpp_headers/md5.o 00:05:38.614 CXX test/cpp_headers/memory.o 00:05:38.614 CXX test/cpp_headers/mmio.o 00:05:38.614 CXX test/cpp_headers/nbd.o 00:05:38.614 CXX test/cpp_headers/net.o 00:05:38.614 CXX test/cpp_headers/notify.o 00:05:38.614 CXX test/cpp_headers/nvme.o 00:05:38.614 CXX 
test/cpp_headers/nvme_intel.o 00:05:38.614 CXX test/cpp_headers/nvme_ocssd.o 00:05:38.614 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:38.614 CXX test/cpp_headers/nvme_spec.o 00:05:38.873 CXX test/cpp_headers/nvme_zns.o 00:05:38.873 CXX test/cpp_headers/nvmf_cmd.o 00:05:38.873 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:38.873 CXX test/cpp_headers/nvmf.o 00:05:38.873 CC examples/nvmf/nvmf/nvmf.o 00:05:38.873 CXX test/cpp_headers/nvmf_spec.o 00:05:38.873 CXX test/cpp_headers/nvmf_transport.o 00:05:38.873 CXX test/cpp_headers/opal.o 00:05:38.873 CXX test/cpp_headers/opal_spec.o 00:05:38.873 CXX test/cpp_headers/pci_ids.o 00:05:39.144 CXX test/cpp_headers/pipe.o 00:05:39.144 CXX test/cpp_headers/queue.o 00:05:39.144 CXX test/cpp_headers/reduce.o 00:05:39.144 CXX test/cpp_headers/rpc.o 00:05:39.144 CXX test/cpp_headers/scheduler.o 00:05:39.144 CXX test/cpp_headers/scsi.o 00:05:39.144 CXX test/cpp_headers/scsi_spec.o 00:05:39.144 LINK nvmf 00:05:39.144 CXX test/cpp_headers/sock.o 00:05:39.144 CXX test/cpp_headers/stdinc.o 00:05:39.144 CXX test/cpp_headers/string.o 00:05:39.430 CXX test/cpp_headers/thread.o 00:05:39.430 CXX test/cpp_headers/trace.o 00:05:39.430 CXX test/cpp_headers/trace_parser.o 00:05:39.430 CXX test/cpp_headers/tree.o 00:05:39.430 CXX test/cpp_headers/ublk.o 00:05:39.430 CXX test/cpp_headers/util.o 00:05:39.430 CXX test/cpp_headers/uuid.o 00:05:39.430 CXX test/cpp_headers/version.o 00:05:39.430 CXX test/cpp_headers/vfio_user_pci.o 00:05:39.430 CXX test/cpp_headers/vfio_user_spec.o 00:05:39.430 CXX test/cpp_headers/vhost.o 00:05:39.430 CXX test/cpp_headers/vmd.o 00:05:39.430 CXX test/cpp_headers/xor.o 00:05:39.430 CXX test/cpp_headers/zipf.o 00:05:39.689 LINK cuse 00:05:42.228 LINK esnap 00:05:42.488 00:05:42.488 real 1m19.371s 00:05:42.488 user 6m16.140s 00:05:42.488 sys 1m11.959s 00:05:42.488 15:14:48 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:42.488 15:14:48 make -- common/autotest_common.sh@10 -- $ set +x 00:05:42.488 
************************************ 00:05:42.488 END TEST make 00:05:42.488 ************************************ 00:05:42.488 15:14:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:42.488 15:14:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:42.488 15:14:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:42.488 15:14:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.488 15:14:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:42.488 15:14:48 -- pm/common@44 -- $ pid=6201 00:05:42.488 15:14:48 -- pm/common@50 -- $ kill -TERM 6201 00:05:42.488 15:14:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.488 15:14:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:42.488 15:14:48 -- pm/common@44 -- $ pid=6203 00:05:42.488 15:14:48 -- pm/common@50 -- $ kill -TERM 6203 00:05:42.488 15:14:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:42.488 15:14:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:42.748 15:14:48 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:42.748 15:14:48 -- common/autotest_common.sh@1691 -- # lcov --version 00:05:42.748 15:14:48 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:42.748 15:14:49 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:42.748 15:14:49 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.748 15:14:49 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.748 15:14:49 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.748 15:14:49 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.748 15:14:49 -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.748 15:14:49 -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.748 15:14:49 -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.748 
15:14:49 -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.748 15:14:49 -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.748 15:14:49 -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.748 15:14:49 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.748 15:14:49 -- scripts/common.sh@344 -- # case "$op" in 00:05:42.748 15:14:49 -- scripts/common.sh@345 -- # : 1 00:05:42.748 15:14:49 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.748 15:14:49 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.748 15:14:49 -- scripts/common.sh@365 -- # decimal 1 00:05:42.748 15:14:49 -- scripts/common.sh@353 -- # local d=1 00:05:42.748 15:14:49 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.748 15:14:49 -- scripts/common.sh@355 -- # echo 1 00:05:42.748 15:14:49 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.748 15:14:49 -- scripts/common.sh@366 -- # decimal 2 00:05:42.748 15:14:49 -- scripts/common.sh@353 -- # local d=2 00:05:42.748 15:14:49 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.748 15:14:49 -- scripts/common.sh@355 -- # echo 2 00:05:42.748 15:14:49 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.748 15:14:49 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.748 15:14:49 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.748 15:14:49 -- scripts/common.sh@368 -- # return 0 00:05:42.748 15:14:49 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.748 15:14:49 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:42.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.748 --rc genhtml_branch_coverage=1 00:05:42.748 --rc genhtml_function_coverage=1 00:05:42.748 --rc genhtml_legend=1 00:05:42.748 --rc geninfo_all_blocks=1 00:05:42.748 --rc geninfo_unexecuted_blocks=1 00:05:42.748 00:05:42.748 ' 00:05:42.748 15:14:49 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:42.748 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.748 --rc genhtml_branch_coverage=1 00:05:42.748 --rc genhtml_function_coverage=1 00:05:42.748 --rc genhtml_legend=1 00:05:42.748 --rc geninfo_all_blocks=1 00:05:42.748 --rc geninfo_unexecuted_blocks=1 00:05:42.748 00:05:42.748 ' 00:05:42.748 15:14:49 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:42.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.748 --rc genhtml_branch_coverage=1 00:05:42.748 --rc genhtml_function_coverage=1 00:05:42.748 --rc genhtml_legend=1 00:05:42.748 --rc geninfo_all_blocks=1 00:05:42.748 --rc geninfo_unexecuted_blocks=1 00:05:42.748 00:05:42.748 ' 00:05:42.748 15:14:49 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:42.748 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.748 --rc genhtml_branch_coverage=1 00:05:42.748 --rc genhtml_function_coverage=1 00:05:42.748 --rc genhtml_legend=1 00:05:42.748 --rc geninfo_all_blocks=1 00:05:42.748 --rc geninfo_unexecuted_blocks=1 00:05:42.748 00:05:42.748 ' 00:05:42.748 15:14:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:42.748 15:14:49 -- nvmf/common.sh@7 -- # uname -s 00:05:42.748 15:14:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.748 15:14:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.748 15:14:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.748 15:14:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.748 15:14:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.748 15:14:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.748 15:14:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.748 15:14:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.748 15:14:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.748 15:14:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.748 15:14:49 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:126d9008-3427-4a83-8f0d-d857067534ac 00:05:42.748 15:14:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=126d9008-3427-4a83-8f0d-d857067534ac 00:05:42.748 15:14:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.748 15:14:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.748 15:14:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:42.748 15:14:49 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.748 15:14:49 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.748 15:14:49 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.748 15:14:49 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.748 15:14:49 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.748 15:14:49 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.748 15:14:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.748 15:14:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.748 15:14:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.748 15:14:49 -- paths/export.sh@5 -- # export PATH 00:05:42.748 15:14:49 -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.748 15:14:49 -- nvmf/common.sh@51 -- # : 0 00:05:42.748 15:14:49 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.749 15:14:49 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.749 15:14:49 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.749 15:14:49 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.749 15:14:49 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.749 15:14:49 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.749 15:14:49 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.749 15:14:49 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.749 15:14:49 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.749 15:14:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:42.749 15:14:49 -- spdk/autotest.sh@32 -- # uname -s 00:05:42.749 15:14:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:42.749 15:14:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:42.749 15:14:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:43.008 15:14:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:43.008 15:14:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:43.008 15:14:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:43.008 15:14:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:43.008 15:14:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:43.008 15:14:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm 
monitor --property 00:05:43.008 15:14:49 -- spdk/autotest.sh@48 -- # udevadm_pid=68033 00:05:43.008 15:14:49 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:43.008 15:14:49 -- pm/common@17 -- # local monitor 00:05:43.008 15:14:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:43.008 15:14:49 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:43.008 15:14:49 -- pm/common@25 -- # sleep 1 00:05:43.008 15:14:49 -- pm/common@21 -- # date +%s 00:05:43.008 15:14:49 -- pm/common@21 -- # date +%s 00:05:43.008 15:14:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731251689 00:05:43.008 15:14:49 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731251689 00:05:43.008 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731251689_collect-cpu-load.pm.log 00:05:43.008 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731251689_collect-vmstat.pm.log 00:05:43.947 15:14:50 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:43.947 15:14:50 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:43.947 15:14:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.947 15:14:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.947 15:14:50 -- spdk/autotest.sh@59 -- # create_test_list 00:05:43.947 15:14:50 -- common/autotest_common.sh@750 -- # xtrace_disable 00:05:43.947 15:14:50 -- common/autotest_common.sh@10 -- # set +x 00:05:43.947 15:14:50 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:43.947 15:14:50 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:43.947 15:14:50 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:43.947 15:14:50 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:43.947 15:14:50 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:43.947 15:14:50 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:43.947 15:14:50 -- common/autotest_common.sh@1455 -- # uname 00:05:43.947 15:14:50 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:43.947 15:14:50 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:43.947 15:14:50 -- common/autotest_common.sh@1475 -- # uname 00:05:43.947 15:14:50 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:43.947 15:14:50 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:43.947 15:14:50 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:44.207 lcov: LCOV version 1.15 00:05:44.207 15:14:50 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:59.181 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:59.181 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:14.108 15:15:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:14.108 15:15:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:14.108 15:15:18 -- common/autotest_common.sh@10 -- # set +x 00:06:14.108 15:15:18 -- spdk/autotest.sh@78 -- # rm -f 00:06:14.108 15:15:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:14.108 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.108 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:14.108 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:14.108 15:15:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:14.108 15:15:19 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:14.108 15:15:19 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:14.108 15:15:19 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:14.108 15:15:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:14.108 15:15:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:14.108 15:15:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:14.108 15:15:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:14.108 15:15:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:14.108 15:15:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:14.108 15:15:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:14.108 15:15:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:14.108 15:15:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:14.108 15:15:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:14.108 15:15:19 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 
00:06:14.108 15:15:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:14.108 15:15:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:14.108 15:15:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:14.108 15:15:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:14.108 15:15:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:14.108 15:15:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:14.109 15:15:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:14.109 15:15:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:14.109 15:15:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:14.109 No valid GPT data, bailing 00:06:14.109 15:15:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:14.109 15:15:19 -- scripts/common.sh@394 -- # pt= 00:06:14.109 15:15:19 -- scripts/common.sh@395 -- # return 1 00:06:14.109 15:15:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:14.109 1+0 records in 00:06:14.109 1+0 records out 00:06:14.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00686081 s, 153 MB/s 00:06:14.109 15:15:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:14.109 15:15:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:14.109 15:15:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:14.109 15:15:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:14.109 15:15:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:14.109 No valid GPT data, bailing 00:06:14.109 15:15:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:14.109 15:15:19 -- scripts/common.sh@394 -- # pt= 00:06:14.109 15:15:19 -- scripts/common.sh@395 -- # return 1 00:06:14.109 15:15:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:14.109 1+0 records in 
00:06:14.109 1+0 records out 00:06:14.109 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0061676 s, 170 MB/s 00:06:14.110 15:15:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:14.110 15:15:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:14.110 15:15:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:14.110 15:15:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:14.110 15:15:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:14.110 No valid GPT data, bailing 00:06:14.110 15:15:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:14.110 15:15:19 -- scripts/common.sh@394 -- # pt= 00:06:14.110 15:15:19 -- scripts/common.sh@395 -- # return 1 00:06:14.110 15:15:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:14.110 1+0 records in 00:06:14.110 1+0 records out 00:06:14.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00640711 s, 164 MB/s 00:06:14.110 15:15:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:14.110 15:15:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:14.110 15:15:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:14.110 15:15:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:14.110 15:15:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:14.110 No valid GPT data, bailing 00:06:14.110 15:15:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:14.110 15:15:19 -- scripts/common.sh@394 -- # pt= 00:06:14.110 15:15:19 -- scripts/common.sh@395 -- # return 1 00:06:14.110 15:15:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:14.110 1+0 records in 00:06:14.110 1+0 records out 00:06:14.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610493 s, 172 MB/s 00:06:14.110 15:15:19 -- spdk/autotest.sh@105 -- # sync 00:06:14.110 15:15:20 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:06:14.110 15:15:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:14.110 15:15:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:17.411 15:15:23 -- spdk/autotest.sh@111 -- # uname -s 00:06:17.411 15:15:23 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:17.411 15:15:23 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:17.411 15:15:23 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:17.671 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.671 Hugepages 00:06:17.671 node hugesize free / total 00:06:17.671 node0 1048576kB 0 / 0 00:06:17.671 node0 2048kB 0 / 0 00:06:17.671 00:06:17.671 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:17.930 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:17.930 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:17.930 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:17.930 15:15:24 -- spdk/autotest.sh@117 -- # uname -s 00:06:17.930 15:15:24 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:17.930 15:15:24 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:17.930 15:15:24 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:18.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.904 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:18.904 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.164 15:15:25 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:20.103 15:15:26 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:20.103 15:15:26 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:20.103 15:15:26 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:20.103 15:15:26 -- common/autotest_common.sh@1518 -- # 
get_nvme_bdfs 00:06:20.103 15:15:26 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:20.103 15:15:26 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:20.103 15:15:26 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:20.103 15:15:26 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:20.103 15:15:26 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:20.103 15:15:26 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:20.103 15:15:26 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:20.103 15:15:26 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:20.672 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:20.672 Waiting for block devices as requested 00:06:20.672 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.932 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:20.932 15:15:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:20.932 15:15:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:20.932 15:15:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:20.932 15:15:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:20.932 15:15:27 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:06:20.932 15:15:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 
00:06:20.932 15:15:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # grep oacs
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:20.932 15:15:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:20.932 15:15:27 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:20.932 15:15:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1541 -- # continue
00:06:20.932 15:15:27 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}"
00:06:20.932 15:15:27 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1
00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme
00:06:20.932 15:15:27 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # grep oacs
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # cut -d: -f2
00:06:20.932 15:15:27 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a'
00:06:20.932 15:15:27 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8
00:06:20.932 15:15:27 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # grep unvmcap
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # cut -d: -f2
00:06:20.932 15:15:27 -- common/autotest_common.sh@1538 -- # unvmcap=' 0'
00:06:20.932 15:15:27 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]]
00:06:20.932 15:15:27 -- common/autotest_common.sh@1541 -- # continue
00:06:20.932 15:15:27 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:20.932 15:15:27 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:20.932 15:15:27 -- common/autotest_common.sh@10 -- # set +x
00:06:20.932 15:15:27 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:20.932 15:15:27 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:20.932 15:15:27 -- common/autotest_common.sh@10 -- # set +x
00:06:21.192 15:15:27 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:21.761 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:06:22.020 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:22.020 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:06:22.020 15:15:28 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:22.020 15:15:28 -- common/autotest_common.sh@730 -- # xtrace_disable
00:06:22.020 15:15:28 -- common/autotest_common.sh@10 -- # set +x
00:06:22.020 15:15:28 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:22.020 15:15:28 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs
00:06:22.280 15:15:28 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54
00:06:22.280 15:15:28 -- common/autotest_common.sh@1561 -- # bdfs=()
00:06:22.280 15:15:28 -- common/autotest_common.sh@1561 -- # _bdfs=()
00:06:22.280 15:15:28 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs
00:06:22.280 15:15:28 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs))
00:06:22.280 15:15:28 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs
00:06:22.280 15:15:28 -- common/autotest_common.sh@1496 -- # bdfs=()
00:06:22.280 15:15:28 -- common/autotest_common.sh@1496 -- # local bdfs
00:06:22.280 15:15:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:22.280 15:15:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:22.280 15:15:28 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:06:22.280 15:15:28 -- common/autotest_common.sh@1498 -- # (( 2 == 0 ))
00:06:22.280 15:15:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0
00:06:22.280 15:15:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:22.280 15:15:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:22.280 15:15:28 -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:22.280 15:15:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:22.280 15:15:28 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}"
00:06:22.280 15:15:28 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device
00:06:22.280 15:15:28 -- common/autotest_common.sh@1564 -- # device=0x0010
00:06:22.280 15:15:28 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:22.280 15:15:28 -- common/autotest_common.sh@1570 -- # (( 0 > 0 ))
00:06:22.280 15:15:28 -- common/autotest_common.sh@1570 -- # return 0
00:06:22.280 15:15:28 -- common/autotest_common.sh@1577 -- # [[ -z '' ]]
00:06:22.280 15:15:28 -- common/autotest_common.sh@1578 -- # return 0
00:06:22.280 15:15:28 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:06:22.280 15:15:28 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:06:22.280 15:15:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:22.280 15:15:28 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:06:22.280 15:15:28 -- spdk/autotest.sh@149 -- # timing_enter lib
00:06:22.280 15:15:28 -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:22.280 15:15:28 -- common/autotest_common.sh@10 -- # set +x
00:06:22.280 15:15:28 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:06:22.280 15:15:28 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:22.280 15:15:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:22.280 15:15:28 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:22.280 15:15:28 -- common/autotest_common.sh@10 -- # set +x
00:06:22.280 ************************************
00:06:22.280 START TEST env
00:06:22.280 ************************************
00:06:22.280 15:15:28 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:06:22.280 * Looking for test storage...
00:06:22.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1691 -- # lcov --version
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:06:22.540 15:15:28 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:22.540 15:15:28 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:22.540 15:15:28 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:22.540 15:15:28 env -- scripts/common.sh@336 -- # IFS=.-:
00:06:22.540 15:15:28 env -- scripts/common.sh@336 -- # read -ra ver1
00:06:22.540 15:15:28 env -- scripts/common.sh@337 -- # IFS=.-:
00:06:22.540 15:15:28 env -- scripts/common.sh@337 -- # read -ra ver2
00:06:22.540 15:15:28 env -- scripts/common.sh@338 -- # local 'op=<'
00:06:22.540 15:15:28 env -- scripts/common.sh@340 -- # ver1_l=2
00:06:22.540 15:15:28 env -- scripts/common.sh@341 -- # ver2_l=1
00:06:22.540 15:15:28 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:22.540 15:15:28 env -- scripts/common.sh@344 -- # case "$op" in
00:06:22.540 15:15:28 env -- scripts/common.sh@345 -- # : 1
00:06:22.540 15:15:28 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:22.540 15:15:28 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:22.540 15:15:28 env -- scripts/common.sh@365 -- # decimal 1
00:06:22.540 15:15:28 env -- scripts/common.sh@353 -- # local d=1
00:06:22.540 15:15:28 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:22.540 15:15:28 env -- scripts/common.sh@355 -- # echo 1
00:06:22.540 15:15:28 env -- scripts/common.sh@365 -- # ver1[v]=1
00:06:22.540 15:15:28 env -- scripts/common.sh@366 -- # decimal 2
00:06:22.540 15:15:28 env -- scripts/common.sh@353 -- # local d=2
00:06:22.540 15:15:28 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:22.540 15:15:28 env -- scripts/common.sh@355 -- # echo 2
00:06:22.540 15:15:28 env -- scripts/common.sh@366 -- # ver2[v]=2
00:06:22.540 15:15:28 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:22.540 15:15:28 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:22.540 15:15:28 env -- scripts/common.sh@368 -- # return 0
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:06:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.540 --rc genhtml_branch_coverage=1
00:06:22.540 --rc genhtml_function_coverage=1
00:06:22.540 --rc genhtml_legend=1
00:06:22.540 --rc geninfo_all_blocks=1
00:06:22.540 --rc geninfo_unexecuted_blocks=1
00:06:22.540
00:06:22.540 '
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:06:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.540 --rc genhtml_branch_coverage=1
00:06:22.540 --rc genhtml_function_coverage=1
00:06:22.540 --rc genhtml_legend=1
00:06:22.540 --rc geninfo_all_blocks=1
00:06:22.540 --rc geninfo_unexecuted_blocks=1
00:06:22.540
00:06:22.540 '
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:06:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.540 --rc genhtml_branch_coverage=1
00:06:22.540 --rc genhtml_function_coverage=1
00:06:22.540 --rc genhtml_legend=1
00:06:22.540 --rc geninfo_all_blocks=1
00:06:22.540 --rc geninfo_unexecuted_blocks=1
00:06:22.540
00:06:22.540 '
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:06:22.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:22.540 --rc genhtml_branch_coverage=1
00:06:22.540 --rc genhtml_function_coverage=1
00:06:22.540 --rc genhtml_legend=1
00:06:22.540 --rc geninfo_all_blocks=1
00:06:22.540 --rc geninfo_unexecuted_blocks=1
00:06:22.540
00:06:22.540 '
00:06:22.540 15:15:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:22.540 15:15:28 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:22.540 15:15:28 env -- common/autotest_common.sh@10 -- # set +x
00:06:22.540 ************************************
00:06:22.540 START TEST env_memory
************************************
00:06:22.540 15:15:28 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:06:22.540
00:06:22.540
00:06:22.540 CUnit - A unit testing framework for C - Version 2.1-3
00:06:22.540 http://cunit.sourceforge.net/
00:06:22.540
00:06:22.540
00:06:22.540 Suite: memory
00:06:22.540 Test: alloc and free memory map ...[2024-11-10 15:15:28.811358] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:06:22.540 passed
00:06:22.540 Test: mem map translation ...[2024-11-10 15:15:28.852834] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:06:22.540 [2024-11-10 15:15:28.852916] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:06:22.540 [2024-11-10 15:15:28.853003] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:06:22.540 [2024-11-10 15:15:28.853050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:06:22.800 passed
00:06:22.800 Test: mem map registration ...[2024-11-10 15:15:28.917681] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:06:22.800 [2024-11-10 15:15:28.917759] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:06:22.800 passed
00:06:22.800 Test: mem map adjacent registrations ...passed
00:06:22.800
00:06:22.800 Run Summary: Type Total Ran Passed Failed Inactive
00:06:22.800 suites 1 1 n/a 0 0
00:06:22.800 tests 4 4 4 0 0
00:06:22.800 asserts 152 152 152 0 n/a
00:06:22.800
00:06:22.800 Elapsed time = 0.226 seconds
00:06:22.800
00:06:22.800 real 0m0.272s
00:06:22.800 user 0m0.236s
00:06:22.800 sys 0m0.026s
00:06:22.800 15:15:29 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:22.800 15:15:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:06:22.800 ************************************
00:06:22.800 END TEST env_memory
00:06:22.800 ************************************
00:06:22.800 15:15:29 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:22.800 15:15:29 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:22.800 15:15:29 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:22.800 15:15:29 env -- common/autotest_common.sh@10 -- # set +x
00:06:22.800 ************************************
00:06:22.800 START TEST env_vtophys
************************************
00:06:22.800 15:15:29 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:06:22.800 EAL: lib.eal log level changed from notice to debug
00:06:22.800 EAL: Detected lcore 0 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 1 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 2 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 3 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 4 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 5 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 6 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 7 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 8 as core 0 on socket 0
00:06:22.800 EAL: Detected lcore 9 as core 0 on socket 0
00:06:22.800 EAL: Maximum logical cores by configuration: 128
00:06:22.800 EAL: Detected CPU lcores: 10
00:06:22.800 EAL: Detected NUMA nodes: 1
00:06:22.800 EAL: Checking presence of .so 'librte_eal.so.25.0'
00:06:22.800 EAL: Detected shared linkage of DPDK
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25.0
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25.0
00:06:22.800 EAL: Registered [vdev] bus.
00:06:22.800 EAL: bus.vdev log level changed from disabled to notice
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25.0
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25.0
00:06:22.800 EAL: pmd.net.i40e.init log level changed from disabled to notice
00:06:22.800 EAL: pmd.net.i40e.driver log level changed from disabled to notice
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so
00:06:22.800 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so
00:06:23.060 EAL: No shared files mode enabled, IPC will be disabled
00:06:23.060 EAL: No shared files mode enabled, IPC is disabled
00:06:23.060 EAL: Selected IOVA mode 'PA'
00:06:23.060 EAL: Probing VFIO support...
00:06:23.060 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:06:23.060 EAL: VFIO modules not loaded, skipping VFIO support...
00:06:23.060 EAL: Ask a virtual area of 0x2e000 bytes
00:06:23.060 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:06:23.060 EAL: Setting up physically contiguous memory...
00:06:23.060 EAL: Setting maximum number of open files to 524288
00:06:23.060 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:23.060 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:23.060 EAL: Ask a virtual area of 0x61000 bytes
00:06:23.060 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:23.060 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:23.060 EAL: Ask a virtual area of 0x400000000 bytes
00:06:23.060 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:23.060 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:23.060 EAL: Ask a virtual area of 0x61000 bytes
00:06:23.060 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:23.060 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:23.060 EAL: Ask a virtual area of 0x400000000 bytes
00:06:23.060 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:23.060 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:23.060 EAL: Ask a virtual area of 0x61000 bytes
00:06:23.060 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:23.060 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:23.060 EAL: Ask a virtual area of 0x400000000 bytes
00:06:23.060 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:23.060 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:23.060 EAL: Ask a virtual area of 0x61000 bytes
00:06:23.060 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:23.060 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:23.060 EAL: Ask a virtual area of 0x400000000 bytes
00:06:23.060 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:23.060 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:23.060 EAL: Hugepages will be freed exactly as allocated.
00:06:23.060 EAL: No shared files mode enabled, IPC is disabled
00:06:23.060 EAL: No shared files mode enabled, IPC is disabled
00:06:23.060 EAL: TSC frequency is ~2294600 KHz
00:06:23.060 EAL: Main lcore 0 is ready (tid=7f015fedda40;cpuset=[0])
00:06:23.060 EAL: Trying to obtain current memory policy.
00:06:23.060 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.060 EAL: Restoring previous memory policy: 0
00:06:23.060 EAL: request: mp_malloc_sync
00:06:23.060 EAL: No shared files mode enabled, IPC is disabled
00:06:23.060 EAL: Heap on socket 0 was expanded by 2MB
00:06:23.060 EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment
00:06:23.060 EAL: No shared files mode enabled, IPC is disabled
00:06:23.060 EAL: Mem event callback 'spdk:(nil)' registered
00:06:23.060 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:06:23.060
00:06:23.060
00:06:23.060 CUnit - A unit testing framework for C - Version 2.1-3
00:06:23.060 http://cunit.sourceforge.net/
00:06:23.060
00:06:23.060
00:06:23.060 Suite: components_suite
00:06:23.629 Test: vtophys_malloc_test ...passed
00:06:23.629 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 4MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 4MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 6MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 6MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 10MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 10MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 18MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 18MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 34MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 34MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 66MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was shrunk by 66MB
00:06:23.629 EAL: Trying to obtain current memory policy.
00:06:23.629 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.629 EAL: Restoring previous memory policy: 4
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.629 EAL: request: mp_malloc_sync
00:06:23.629 EAL: No shared files mode enabled, IPC is disabled
00:06:23.629 EAL: Heap on socket 0 was expanded by 130MB
00:06:23.629 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.888 EAL: request: mp_malloc_sync
00:06:23.888 EAL: No shared files mode enabled, IPC is disabled
00:06:23.888 EAL: Heap on socket 0 was shrunk by 130MB
00:06:23.888 EAL: Trying to obtain current memory policy.
00:06:23.888 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.888 EAL: Restoring previous memory policy: 4
00:06:23.888 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.888 EAL: request: mp_malloc_sync
00:06:23.888 EAL: No shared files mode enabled, IPC is disabled
00:06:23.888 EAL: Heap on socket 0 was expanded by 258MB
00:06:23.888 EAL: Calling mem event callback 'spdk:(nil)'
00:06:24.147 EAL: request: mp_malloc_sync
00:06:24.147 EAL: No shared files mode enabled, IPC is disabled
00:06:24.147 EAL: Heap on socket 0 was shrunk by 258MB
00:06:24.147 EAL: Trying to obtain current memory policy.
00:06:24.147 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:24.147 EAL: Restoring previous memory policy: 4
00:06:24.147 EAL: Calling mem event callback 'spdk:(nil)'
00:06:24.147 EAL: request: mp_malloc_sync
00:06:24.147 EAL: No shared files mode enabled, IPC is disabled
00:06:24.147 EAL: Heap on socket 0 was expanded by 514MB
00:06:24.407 EAL: Calling mem event callback 'spdk:(nil)'
00:06:24.407 EAL: request: mp_malloc_sync
00:06:24.407 EAL: No shared files mode enabled, IPC is disabled
00:06:24.407 EAL: Heap on socket 0 was shrunk by 514MB
00:06:24.407 EAL: Trying to obtain current memory policy.
00:06:24.407 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:24.976 EAL: Restoring previous memory policy: 4
00:06:24.976 EAL: Calling mem event callback 'spdk:(nil)'
00:06:24.976 EAL: request: mp_malloc_sync
00:06:24.976 EAL: No shared files mode enabled, IPC is disabled
00:06:24.976 EAL: Heap on socket 0 was expanded by 1026MB
00:06:25.236 EAL: Calling mem event callback 'spdk:(nil)'
00:06:25.495 passed
00:06:25.495
00:06:25.495 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.495 suites 1 1 n/a 0 0
00:06:25.495 tests 2 2 2 0 0
00:06:25.495 asserts 5274 5274 5274 0 n/a
00:06:25.495
00:06:25.495 Elapsed time = 2.387 seconds
00:06:25.495 EAL: request: mp_malloc_sync
00:06:25.495 EAL: No shared files mode enabled, IPC is disabled
00:06:25.495 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:25.495 EAL: Calling mem event callback 'spdk:(nil)'
00:06:25.495 EAL: request: mp_malloc_sync
00:06:25.495 EAL: No shared files mode enabled, IPC is disabled
00:06:25.495 EAL: Heap on socket 0 was shrunk by 2MB
00:06:25.495 EAL: No shared files mode enabled, IPC is disabled
00:06:25.495 EAL: No shared files mode enabled, IPC is disabled
00:06:25.495 EAL: No shared files mode enabled, IPC is disabled
00:06:25.495
00:06:25.495 real 0m2.662s
00:06:25.495 user 0m1.384s
00:06:25.495 sys 0m1.138s
00:06:25.495 15:15:31 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:25.495 15:15:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:25.495 ************************************
00:06:25.495 END TEST env_vtophys
************************************
00:06:25.495 15:15:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:25.495 15:15:31 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:25.495 15:15:31 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:25.495 15:15:31 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.495 ************************************
00:06:25.495 START TEST env_pci
************************************
00:06:25.496 15:15:31 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:06:25.755
00:06:25.755
00:06:25.755 CUnit - A unit testing framework for C - Version 2.1-3
00:06:25.755 http://cunit.sourceforge.net/
00:06:25.755
00:06:25.755
00:06:25.755 Suite: pci
00:06:25.755 Test: pci_hook ...[2024-11-10 15:15:31.863151] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 70291 has claimed it
00:06:25.755 EAL: Cannot find device (10000:00:01.0)
00:06:25.755 EAL: Failed to attach device on primary process
00:06:25.755 passed
00:06:25.755
00:06:25.755 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.755 suites 1 1 n/a 0 0
00:06:25.755 tests 1 1 1 0 0
00:06:25.755 asserts 25 25 25 0 n/a
00:06:25.755
00:06:25.755 Elapsed time = 0.007 seconds
00:06:25.755
00:06:25.755 real 0m0.114s
00:06:25.755 user 0m0.037s
00:06:25.755 sys 0m0.076s
00:06:25.755 15:15:31 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:25.755 15:15:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:25.755 ************************************
00:06:25.755 END TEST env_pci
************************************
00:06:25.755 15:15:31 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:25.755 15:15:31 env -- env/env.sh@15 -- # uname
00:06:25.755 15:15:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:25.755 15:15:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:25.755 15:15:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:25.755 15:15:32 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:06:25.755 15:15:32 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:25.755 15:15:32 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.755 ************************************
00:06:25.755 START TEST env_dpdk_post_init
************************************
00:06:25.755 15:15:32 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:25.755 EAL: Detected CPU lcores: 10
00:06:25.755 EAL: Detected NUMA nodes: 1
00:06:25.755 EAL: Detected shared linkage of DPDK
00:06:25.755 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:25.755 EAL: Selected IOVA mode 'PA'
00:06:26.015 Starting DPDK initialization...
00:06:26.015 Starting SPDK post initialization...
00:06:26.015 SPDK NVMe probe
00:06:26.015 Attaching to 0000:00:10.0
00:06:26.015 Attaching to 0000:00:11.0
00:06:26.015 Attached to 0000:00:10.0
00:06:26.015 Attached to 0000:00:11.0
00:06:26.015 Cleaning up...
00:06:26.015
00:06:26.015 real 0m0.275s
00:06:26.015 user 0m0.089s
00:06:26.015 sys 0m0.087s
00:06:26.015 15:15:32 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:26.015 ************************************
00:06:26.015 END TEST env_dpdk_post_init
************************************
00:06:26.015 15:15:32 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:26.015 15:15:32 env -- env/env.sh@26 -- # uname
00:06:26.015 15:15:32 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:26.015 15:15:32 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:26.015 15:15:32 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:06:26.015 15:15:32 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:26.015 15:15:32 env -- common/autotest_common.sh@10 -- # set +x
00:06:26.015 ************************************
00:06:26.015 START TEST env_mem_callbacks
************************************
00:06:26.015 15:15:32 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:06:26.275 EAL: Detected CPU lcores: 10
00:06:26.275 EAL: Detected NUMA nodes: 1
00:06:26.275 EAL: Detected shared linkage of DPDK
00:06:26.275 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:26.275 EAL: Selected IOVA mode 'PA'
00:06:26.275
00:06:26.275
00:06:26.275 CUnit - A unit testing framework for C - Version 2.1-3
00:06:26.275 http://cunit.sourceforge.net/
00:06:26.275
00:06:26.275
00:06:26.275 Suite: memory
00:06:26.275 Test: test ...
00:06:26.275 register 0x200000200000 2097152 00:06:26.275 malloc 3145728 00:06:26.275 register 0x200000400000 4194304 00:06:26.275 buf 0x200000500000 len 3145728 PASSED 00:06:26.275 malloc 64 00:06:26.275 buf 0x2000004fff40 len 64 PASSED 00:06:26.275 malloc 4194304 00:06:26.275 register 0x200000800000 6291456 00:06:26.275 buf 0x200000a00000 len 4194304 PASSED 00:06:26.275 free 0x200000500000 3145728 00:06:26.275 free 0x2000004fff40 64 00:06:26.275 unregister 0x200000400000 4194304 PASSED 00:06:26.275 free 0x200000a00000 4194304 00:06:26.275 unregister 0x200000800000 6291456 PASSED 00:06:26.275 malloc 8388608 00:06:26.275 register 0x200000400000 10485760 00:06:26.275 buf 0x200000600000 len 8388608 PASSED 00:06:26.275 free 0x200000600000 8388608 00:06:26.275 unregister 0x200000400000 10485760 PASSED 00:06:26.275 passed 00:06:26.275 00:06:26.275 Run Summary: Type Total Ran Passed Failed Inactive 00:06:26.275 suites 1 1 n/a 0 0 00:06:26.275 tests 1 1 1 0 0 00:06:26.275 asserts 15 15 15 0 n/a 00:06:26.275 00:06:26.275 Elapsed time = 0.013 seconds 00:06:26.275 00:06:26.275 real 0m0.217s 00:06:26.275 user 0m0.037s 00:06:26.275 sys 0m0.078s 00:06:26.275 15:15:32 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.275 15:15:32 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:26.275 ************************************ 00:06:26.275 END TEST env_mem_callbacks 00:06:26.275 ************************************ 00:06:26.535 00:06:26.535 real 0m4.132s 00:06:26.535 user 0m2.008s 00:06:26.535 sys 0m1.789s 00:06:26.535 15:15:32 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.535 15:15:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:26.535 ************************************ 00:06:26.535 END TEST env 00:06:26.535 ************************************ 00:06:26.535 15:15:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:26.535 15:15:32 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.535 15:15:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.535 15:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.535 ************************************ 00:06:26.535 START TEST rpc 00:06:26.535 ************************************ 00:06:26.535 15:15:32 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:26.535 * Looking for test storage... 00:06:26.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:26.535 15:15:32 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.535 15:15:32 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.535 15:15:32 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.795 15:15:32 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.795 15:15:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.795 15:15:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.795 15:15:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.795 15:15:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.795 15:15:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.795 15:15:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:26.795 15:15:32 rpc -- scripts/common.sh@345 -- # : 1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.795 15:15:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.795 15:15:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@353 -- # local d=1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.795 15:15:32 rpc -- scripts/common.sh@355 -- # echo 1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.795 15:15:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@353 -- # local d=2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.795 15:15:32 rpc -- scripts/common.sh@355 -- # echo 2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.795 15:15:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.795 15:15:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.795 15:15:32 rpc -- scripts/common.sh@368 -- # return 0 00:06:26.795 15:15:32 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.795 15:15:32 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.795 --rc genhtml_branch_coverage=1 00:06:26.795 --rc genhtml_function_coverage=1 00:06:26.795 --rc genhtml_legend=1 00:06:26.795 --rc geninfo_all_blocks=1 00:06:26.795 --rc geninfo_unexecuted_blocks=1 00:06:26.795 00:06:26.795 ' 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.796 --rc genhtml_branch_coverage=1 00:06:26.796 --rc genhtml_function_coverage=1 00:06:26.796 --rc genhtml_legend=1 00:06:26.796 --rc geninfo_all_blocks=1 00:06:26.796 --rc geninfo_unexecuted_blocks=1 00:06:26.796 00:06:26.796 ' 00:06:26.796 15:15:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=70418 00:06:26.796 15:15:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:26.796 15:15:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.796 15:15:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 70418 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@833 -- # '[' -z 70418 ']' 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.796 15:15:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.796 [2024-11-10 15:15:33.046429] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:06:26.796 [2024-11-10 15:15:33.046618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:06:27.056 [2024-11-10 15:15:33.183773] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.056 [2024-11-10 15:15:33.224296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.056 [2024-11-10 15:15:33.266986] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:27.056 [2024-11-10 15:15:33.267194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 70418' to capture a snapshot of events at runtime. 00:06:27.056 [2024-11-10 15:15:33.267237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.056 [2024-11-10 15:15:33.267270] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.056 [2024-11-10 15:15:33.267290] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid70418 for offline analysis/debug. 
00:06:27.056 [2024-11-10 15:15:33.267775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.626 15:15:33 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.626 15:15:33 rpc -- common/autotest_common.sh@866 -- # return 0 00:06:27.626 15:15:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.626 15:15:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:27.626 15:15:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:27.626 15:15:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:27.626 15:15:33 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.626 15:15:33 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.626 15:15:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.626 ************************************ 00:06:27.626 START TEST rpc_integrity 00:06:27.626 ************************************ 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:27.626 15:15:33 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:27.626 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.626 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.886 15:15:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.886 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:27.886 { 00:06:27.886 "name": "Malloc0", 00:06:27.886 "aliases": [ 00:06:27.886 "56e36a4e-2528-4490-a061-fb34aad56fe5" 00:06:27.886 ], 00:06:27.886 "product_name": "Malloc disk", 00:06:27.886 "block_size": 512, 00:06:27.886 "num_blocks": 16384, 00:06:27.886 "uuid": "56e36a4e-2528-4490-a061-fb34aad56fe5", 00:06:27.886 "assigned_rate_limits": { 00:06:27.886 "rw_ios_per_sec": 0, 00:06:27.886 "rw_mbytes_per_sec": 0, 00:06:27.886 "r_mbytes_per_sec": 0, 00:06:27.886 "w_mbytes_per_sec": 0 00:06:27.886 }, 00:06:27.886 "claimed": false, 00:06:27.886 "zoned": false, 00:06:27.886 "supported_io_types": { 00:06:27.886 "read": true, 00:06:27.886 "write": true, 00:06:27.886 "unmap": true, 00:06:27.886 "flush": true, 00:06:27.886 "reset": true, 00:06:27.886 "nvme_admin": false, 00:06:27.886 "nvme_io": false, 00:06:27.886 "nvme_io_md": false, 00:06:27.886 "write_zeroes": true, 00:06:27.886 "zcopy": true, 00:06:27.886 "get_zone_info": false, 00:06:27.886 "zone_management": false, 00:06:27.886 "zone_append": false, 00:06:27.886 "compare": false, 00:06:27.886 "compare_and_write": false, 00:06:27.886 "abort": true, 00:06:27.886 "seek_hole": false, 
00:06:27.886 "seek_data": false, 00:06:27.886 "copy": true, 00:06:27.886 "nvme_iov_md": false 00:06:27.886 }, 00:06:27.886 "memory_domains": [ 00:06:27.886 { 00:06:27.886 "dma_device_id": "system", 00:06:27.886 "dma_device_type": 1 00:06:27.886 }, 00:06:27.886 { 00:06:27.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.886 "dma_device_type": 2 00:06:27.886 } 00:06:27.886 ], 00:06:27.886 "driver_specific": {} 00:06:27.886 } 00:06:27.886 ]' 00:06:27.886 15:15:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:27.886 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:27.886 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.886 [2024-11-10 15:15:34.047561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:27.886 [2024-11-10 15:15:34.047631] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:27.886 [2024-11-10 15:15:34.047655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:27.886 [2024-11-10 15:15:34.047675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:27.886 [2024-11-10 15:15:34.050299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:27.886 [2024-11-10 15:15:34.050412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:27.886 Passthru0 00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.886 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:06:27.886 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.886 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:27.886 { 00:06:27.886 "name": "Malloc0", 00:06:27.886 "aliases": [ 00:06:27.886 "56e36a4e-2528-4490-a061-fb34aad56fe5" 00:06:27.886 ], 00:06:27.886 "product_name": "Malloc disk", 00:06:27.886 "block_size": 512, 00:06:27.886 "num_blocks": 16384, 00:06:27.886 "uuid": "56e36a4e-2528-4490-a061-fb34aad56fe5", 00:06:27.886 "assigned_rate_limits": { 00:06:27.886 "rw_ios_per_sec": 0, 00:06:27.886 "rw_mbytes_per_sec": 0, 00:06:27.886 "r_mbytes_per_sec": 0, 00:06:27.886 "w_mbytes_per_sec": 0 00:06:27.886 }, 00:06:27.886 "claimed": true, 00:06:27.886 "claim_type": "exclusive_write", 00:06:27.886 "zoned": false, 00:06:27.886 "supported_io_types": { 00:06:27.886 "read": true, 00:06:27.886 "write": true, 00:06:27.886 "unmap": true, 00:06:27.886 "flush": true, 00:06:27.886 "reset": true, 00:06:27.886 "nvme_admin": false, 00:06:27.886 "nvme_io": false, 00:06:27.886 "nvme_io_md": false, 00:06:27.886 "write_zeroes": true, 00:06:27.886 "zcopy": true, 00:06:27.886 "get_zone_info": false, 00:06:27.886 "zone_management": false, 00:06:27.886 "zone_append": false, 00:06:27.886 "compare": false, 00:06:27.886 "compare_and_write": false, 00:06:27.886 "abort": true, 00:06:27.886 "seek_hole": false, 00:06:27.886 "seek_data": false, 00:06:27.886 "copy": true, 00:06:27.886 "nvme_iov_md": false 00:06:27.886 }, 00:06:27.886 "memory_domains": [ 00:06:27.886 { 00:06:27.886 "dma_device_id": "system", 00:06:27.886 "dma_device_type": 1 00:06:27.886 }, 00:06:27.886 { 00:06:27.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.886 "dma_device_type": 2 00:06:27.886 } 00:06:27.886 ], 00:06:27.886 "driver_specific": {} 00:06:27.886 }, 00:06:27.886 { 00:06:27.886 "name": "Passthru0", 00:06:27.886 "aliases": [ 00:06:27.886 "ab62b81e-1a15-5c76-90f5-de780d328e91" 00:06:27.886 ], 00:06:27.886 "product_name": "passthru", 00:06:27.886 
"block_size": 512, 00:06:27.886 "num_blocks": 16384, 00:06:27.886 "uuid": "ab62b81e-1a15-5c76-90f5-de780d328e91", 00:06:27.886 "assigned_rate_limits": { 00:06:27.886 "rw_ios_per_sec": 0, 00:06:27.887 "rw_mbytes_per_sec": 0, 00:06:27.887 "r_mbytes_per_sec": 0, 00:06:27.887 "w_mbytes_per_sec": 0 00:06:27.887 }, 00:06:27.887 "claimed": false, 00:06:27.887 "zoned": false, 00:06:27.887 "supported_io_types": { 00:06:27.887 "read": true, 00:06:27.887 "write": true, 00:06:27.887 "unmap": true, 00:06:27.887 "flush": true, 00:06:27.887 "reset": true, 00:06:27.887 "nvme_admin": false, 00:06:27.887 "nvme_io": false, 00:06:27.887 "nvme_io_md": false, 00:06:27.887 "write_zeroes": true, 00:06:27.887 "zcopy": true, 00:06:27.887 "get_zone_info": false, 00:06:27.887 "zone_management": false, 00:06:27.887 "zone_append": false, 00:06:27.887 "compare": false, 00:06:27.887 "compare_and_write": false, 00:06:27.887 "abort": true, 00:06:27.887 "seek_hole": false, 00:06:27.887 "seek_data": false, 00:06:27.887 "copy": true, 00:06:27.887 "nvme_iov_md": false 00:06:27.887 }, 00:06:27.887 "memory_domains": [ 00:06:27.887 { 00:06:27.887 "dma_device_id": "system", 00:06:27.887 "dma_device_type": 1 00:06:27.887 }, 00:06:27.887 { 00:06:27.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:27.887 "dma_device_type": 2 00:06:27.887 } 00:06:27.887 ], 00:06:27.887 "driver_specific": { 00:06:27.887 "passthru": { 00:06:27.887 "name": "Passthru0", 00:06:27.887 "base_bdev_name": "Malloc0" 00:06:27.887 } 00:06:27.887 } 00:06:27.887 } 00:06:27.887 ]' 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.887 15:15:34 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.887 ************************************ 00:06:27.887 END TEST rpc_integrity 00:06:27.887 ************************************ 00:06:27.887 15:15:34 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.887 00:06:27.887 real 0m0.342s 00:06:27.887 user 0m0.197s 00:06:27.887 sys 0m0.061s 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.887 15:15:34 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 15:15:34 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:28.147 15:15:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.147 15:15:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.147 15:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 ************************************ 00:06:28.147 START TEST rpc_plugins 00:06:28.147 ************************************ 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:28.147 { 00:06:28.147 "name": "Malloc1", 00:06:28.147 "aliases": [ 00:06:28.147 "6050528e-b55d-49da-a1d6-5f987084e542" 00:06:28.147 ], 00:06:28.147 "product_name": "Malloc disk", 00:06:28.147 "block_size": 4096, 00:06:28.147 "num_blocks": 256, 00:06:28.147 "uuid": "6050528e-b55d-49da-a1d6-5f987084e542", 00:06:28.147 "assigned_rate_limits": { 00:06:28.147 "rw_ios_per_sec": 0, 00:06:28.147 "rw_mbytes_per_sec": 0, 00:06:28.147 "r_mbytes_per_sec": 0, 00:06:28.147 "w_mbytes_per_sec": 0 00:06:28.147 }, 00:06:28.147 "claimed": false, 00:06:28.147 "zoned": false, 00:06:28.147 "supported_io_types": { 00:06:28.147 "read": true, 00:06:28.147 "write": true, 00:06:28.147 "unmap": true, 00:06:28.147 "flush": true, 00:06:28.147 "reset": true, 00:06:28.147 "nvme_admin": false, 00:06:28.147 "nvme_io": false, 00:06:28.147 "nvme_io_md": false, 00:06:28.147 "write_zeroes": true, 00:06:28.147 "zcopy": true, 00:06:28.147 "get_zone_info": false, 00:06:28.147 "zone_management": false, 00:06:28.147 "zone_append": false, 00:06:28.147 "compare": false, 00:06:28.147 "compare_and_write": false, 00:06:28.147 "abort": true, 00:06:28.147 "seek_hole": false, 00:06:28.147 "seek_data": false, 00:06:28.147 "copy": 
true, 00:06:28.147 "nvme_iov_md": false 00:06:28.147 }, 00:06:28.147 "memory_domains": [ 00:06:28.147 { 00:06:28.147 "dma_device_id": "system", 00:06:28.147 "dma_device_type": 1 00:06:28.147 }, 00:06:28.147 { 00:06:28.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.147 "dma_device_type": 2 00:06:28.147 } 00:06:28.147 ], 00:06:28.147 "driver_specific": {} 00:06:28.147 } 00:06:28.147 ]' 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:28.147 15:15:34 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:28.147 00:06:28.147 real 0m0.166s 00:06:28.147 user 0m0.095s 00:06:28.147 sys 0m0.032s 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.147 ************************************ 00:06:28.147 END TEST rpc_plugins 00:06:28.147 ************************************ 00:06:28.147 15:15:34 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:28.407 15:15:34 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:28.407 15:15:34 rpc -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.407 15:15:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.407 15:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.407 ************************************ 00:06:28.407 START TEST rpc_trace_cmd_test 00:06:28.407 ************************************ 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:28.407 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid70418", 00:06:28.407 "tpoint_group_mask": "0x8", 00:06:28.407 "iscsi_conn": { 00:06:28.407 "mask": "0x2", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "scsi": { 00:06:28.407 "mask": "0x4", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "bdev": { 00:06:28.407 "mask": "0x8", 00:06:28.407 "tpoint_mask": "0xffffffffffffffff" 00:06:28.407 }, 00:06:28.407 "nvmf_rdma": { 00:06:28.407 "mask": "0x10", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "nvmf_tcp": { 00:06:28.407 "mask": "0x20", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "ftl": { 00:06:28.407 "mask": "0x40", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "blobfs": { 00:06:28.407 "mask": "0x80", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "dsa": { 00:06:28.407 "mask": "0x200", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "thread": { 00:06:28.407 "mask": "0x400", 00:06:28.407 
"tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "nvme_pcie": { 00:06:28.407 "mask": "0x800", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "iaa": { 00:06:28.407 "mask": "0x1000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "nvme_tcp": { 00:06:28.407 "mask": "0x2000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "bdev_nvme": { 00:06:28.407 "mask": "0x4000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "sock": { 00:06:28.407 "mask": "0x8000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "blob": { 00:06:28.407 "mask": "0x10000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "bdev_raid": { 00:06:28.407 "mask": "0x20000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 }, 00:06:28.407 "scheduler": { 00:06:28.407 "mask": "0x40000", 00:06:28.407 "tpoint_mask": "0x0" 00:06:28.407 } 00:06:28.407 }' 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:28.407 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:28.408 ************************************ 00:06:28.408 END TEST rpc_trace_cmd_test 00:06:28.408 ************************************ 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:28.408 00:06:28.408 real 0m0.202s 00:06:28.408 user 
0m0.157s 00:06:28.408 sys 0m0.035s 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.408 15:15:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 15:15:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:28.668 15:15:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:28.668 15:15:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:28.668 15:15:34 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.668 15:15:34 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.668 15:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 ************************************ 00:06:28.668 START TEST rpc_daemon_integrity 00:06:28.668 ************************************ 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:28.668 { 00:06:28.668 "name": "Malloc2", 00:06:28.668 "aliases": [ 00:06:28.668 "4e26f1c8-a6d9-4390-a804-af6d304b7b85" 00:06:28.668 ], 00:06:28.668 "product_name": "Malloc disk", 00:06:28.668 "block_size": 512, 00:06:28.668 "num_blocks": 16384, 00:06:28.668 "uuid": "4e26f1c8-a6d9-4390-a804-af6d304b7b85", 00:06:28.668 "assigned_rate_limits": { 00:06:28.668 "rw_ios_per_sec": 0, 00:06:28.668 "rw_mbytes_per_sec": 0, 00:06:28.668 "r_mbytes_per_sec": 0, 00:06:28.668 "w_mbytes_per_sec": 0 00:06:28.668 }, 00:06:28.668 "claimed": false, 00:06:28.668 "zoned": false, 00:06:28.668 "supported_io_types": { 00:06:28.668 "read": true, 00:06:28.668 "write": true, 00:06:28.668 "unmap": true, 00:06:28.668 "flush": true, 00:06:28.668 "reset": true, 00:06:28.668 "nvme_admin": false, 00:06:28.668 "nvme_io": false, 00:06:28.668 "nvme_io_md": false, 00:06:28.668 "write_zeroes": true, 00:06:28.668 "zcopy": true, 00:06:28.668 "get_zone_info": false, 00:06:28.668 "zone_management": false, 00:06:28.668 "zone_append": false, 00:06:28.668 "compare": false, 00:06:28.668 "compare_and_write": false, 00:06:28.668 "abort": true, 00:06:28.668 "seek_hole": false, 00:06:28.668 "seek_data": false, 00:06:28.668 "copy": true, 00:06:28.668 "nvme_iov_md": false 00:06:28.668 }, 00:06:28.668 "memory_domains": [ 00:06:28.668 { 00:06:28.668 "dma_device_id": "system", 00:06:28.668 "dma_device_type": 1 00:06:28.668 }, 00:06:28.668 { 00:06:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.668 "dma_device_type": 2 00:06:28.668 } 
00:06:28.668 ], 00:06:28.668 "driver_specific": {} 00:06:28.668 } 00:06:28.668 ]' 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 [2024-11-10 15:15:34.956118] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:28.668 [2024-11-10 15:15:34.956179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:28.668 [2024-11-10 15:15:34.956201] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:06:28.668 [2024-11-10 15:15:34.956213] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:28.668 [2024-11-10 15:15:34.958687] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:28.668 [2024-11-10 15:15:34.958728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:28.668 Passthru0 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.668 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:28.668 { 00:06:28.668 "name": "Malloc2", 00:06:28.668 "aliases": [ 00:06:28.668 "4e26f1c8-a6d9-4390-a804-af6d304b7b85" 
00:06:28.668 ], 00:06:28.668 "product_name": "Malloc disk", 00:06:28.668 "block_size": 512, 00:06:28.668 "num_blocks": 16384, 00:06:28.668 "uuid": "4e26f1c8-a6d9-4390-a804-af6d304b7b85", 00:06:28.668 "assigned_rate_limits": { 00:06:28.668 "rw_ios_per_sec": 0, 00:06:28.668 "rw_mbytes_per_sec": 0, 00:06:28.668 "r_mbytes_per_sec": 0, 00:06:28.668 "w_mbytes_per_sec": 0 00:06:28.668 }, 00:06:28.668 "claimed": true, 00:06:28.668 "claim_type": "exclusive_write", 00:06:28.668 "zoned": false, 00:06:28.668 "supported_io_types": { 00:06:28.668 "read": true, 00:06:28.668 "write": true, 00:06:28.668 "unmap": true, 00:06:28.668 "flush": true, 00:06:28.668 "reset": true, 00:06:28.668 "nvme_admin": false, 00:06:28.668 "nvme_io": false, 00:06:28.668 "nvme_io_md": false, 00:06:28.668 "write_zeroes": true, 00:06:28.668 "zcopy": true, 00:06:28.668 "get_zone_info": false, 00:06:28.668 "zone_management": false, 00:06:28.668 "zone_append": false, 00:06:28.668 "compare": false, 00:06:28.668 "compare_and_write": false, 00:06:28.668 "abort": true, 00:06:28.668 "seek_hole": false, 00:06:28.668 "seek_data": false, 00:06:28.668 "copy": true, 00:06:28.668 "nvme_iov_md": false 00:06:28.668 }, 00:06:28.668 "memory_domains": [ 00:06:28.668 { 00:06:28.668 "dma_device_id": "system", 00:06:28.668 "dma_device_type": 1 00:06:28.668 }, 00:06:28.668 { 00:06:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.668 "dma_device_type": 2 00:06:28.668 } 00:06:28.668 ], 00:06:28.668 "driver_specific": {} 00:06:28.668 }, 00:06:28.668 { 00:06:28.668 "name": "Passthru0", 00:06:28.668 "aliases": [ 00:06:28.668 "5429b71e-85a6-5b40-8487-7a7de713bde1" 00:06:28.668 ], 00:06:28.669 "product_name": "passthru", 00:06:28.669 "block_size": 512, 00:06:28.669 "num_blocks": 16384, 00:06:28.669 "uuid": "5429b71e-85a6-5b40-8487-7a7de713bde1", 00:06:28.669 "assigned_rate_limits": { 00:06:28.669 "rw_ios_per_sec": 0, 00:06:28.669 "rw_mbytes_per_sec": 0, 00:06:28.669 "r_mbytes_per_sec": 0, 00:06:28.669 "w_mbytes_per_sec": 0 
00:06:28.669 }, 00:06:28.669 "claimed": false, 00:06:28.669 "zoned": false, 00:06:28.669 "supported_io_types": { 00:06:28.669 "read": true, 00:06:28.669 "write": true, 00:06:28.669 "unmap": true, 00:06:28.669 "flush": true, 00:06:28.669 "reset": true, 00:06:28.669 "nvme_admin": false, 00:06:28.669 "nvme_io": false, 00:06:28.669 "nvme_io_md": false, 00:06:28.669 "write_zeroes": true, 00:06:28.669 "zcopy": true, 00:06:28.669 "get_zone_info": false, 00:06:28.669 "zone_management": false, 00:06:28.669 "zone_append": false, 00:06:28.669 "compare": false, 00:06:28.669 "compare_and_write": false, 00:06:28.669 "abort": true, 00:06:28.669 "seek_hole": false, 00:06:28.669 "seek_data": false, 00:06:28.669 "copy": true, 00:06:28.669 "nvme_iov_md": false 00:06:28.669 }, 00:06:28.669 "memory_domains": [ 00:06:28.669 { 00:06:28.669 "dma_device_id": "system", 00:06:28.669 "dma_device_type": 1 00:06:28.669 }, 00:06:28.669 { 00:06:28.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:28.669 "dma_device_type": 2 00:06:28.669 } 00:06:28.669 ], 00:06:28.669 "driver_specific": { 00:06:28.669 "passthru": { 00:06:28.669 "name": "Passthru0", 00:06:28.669 "base_bdev_name": "Malloc2" 00:06:28.669 } 00:06:28.669 } 00:06:28.669 } 00:06:28.669 ]' 00:06:28.669 15:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:28.929 ************************************ 00:06:28.929 END TEST rpc_daemon_integrity 00:06:28.929 ************************************ 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:28.929 00:06:28.929 real 0m0.314s 00:06:28.929 user 0m0.175s 00:06:28.929 sys 0m0.065s 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:28.929 15:15:35 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:28.929 15:15:35 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:28.929 15:15:35 rpc -- rpc/rpc.sh@84 -- # killprocess 70418 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@952 -- # '[' -z 70418 ']' 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@956 -- # kill -0 70418 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@957 -- # uname 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70418 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:28.929 
killing process with pid 70418 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70418' 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@971 -- # kill 70418 00:06:28.929 15:15:35 rpc -- common/autotest_common.sh@976 -- # wait 70418 00:06:29.498 00:06:29.498 real 0m3.108s 00:06:29.498 user 0m3.489s 00:06:29.498 sys 0m1.054s 00:06:29.498 15:15:35 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.498 15:15:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.498 ************************************ 00:06:29.498 END TEST rpc 00:06:29.498 ************************************ 00:06:29.758 15:15:35 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:29.758 15:15:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.758 15:15:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.758 15:15:35 -- common/autotest_common.sh@10 -- # set +x 00:06:29.758 ************************************ 00:06:29.758 START TEST skip_rpc 00:06:29.758 ************************************ 00:06:29.758 15:15:35 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:29.758 * Looking for test storage... 
00:06:29.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:29.758 15:15:36 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:29.758 15:15:36 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:29.758 15:15:36 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:29.758 15:15:36 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:29.758 15:15:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:29.759 15:15:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.759 --rc genhtml_branch_coverage=1 00:06:29.759 --rc genhtml_function_coverage=1 00:06:29.759 --rc genhtml_legend=1 00:06:29.759 --rc geninfo_all_blocks=1 00:06:29.759 --rc geninfo_unexecuted_blocks=1 00:06:29.759 00:06:29.759 ' 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.759 --rc genhtml_branch_coverage=1 00:06:29.759 --rc genhtml_function_coverage=1 00:06:29.759 --rc genhtml_legend=1 00:06:29.759 --rc geninfo_all_blocks=1 00:06:29.759 --rc geninfo_unexecuted_blocks=1 00:06:29.759 00:06:29.759 ' 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:06:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.759 --rc genhtml_branch_coverage=1 00:06:29.759 --rc genhtml_function_coverage=1 00:06:29.759 --rc genhtml_legend=1 00:06:29.759 --rc geninfo_all_blocks=1 00:06:29.759 --rc geninfo_unexecuted_blocks=1 00:06:29.759 00:06:29.759 ' 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:29.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:29.759 --rc genhtml_branch_coverage=1 00:06:29.759 --rc genhtml_function_coverage=1 00:06:29.759 --rc genhtml_legend=1 00:06:29.759 --rc geninfo_all_blocks=1 00:06:29.759 --rc geninfo_unexecuted_blocks=1 00:06:29.759 00:06:29.759 ' 00:06:29.759 15:15:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:29.759 15:15:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:29.759 15:15:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.759 15:15:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.022 ************************************ 00:06:30.022 START TEST skip_rpc 00:06:30.022 ************************************ 00:06:30.022 15:15:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:06:30.022 15:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70625 00:06:30.022 15:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:30.022 15:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.022 15:15:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:30.022 [2024-11-10 15:15:36.219775] Starting SPDK v25.01-pre 
git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:30.022 [2024-11-10 15:15:36.219907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70625 ] 00:06:30.022 [2024-11-10 15:15:36.352984] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.289 [2024-11-10 15:15:36.389725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.289 [2024-11-10 15:15:36.429960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 
-- # (( es > 128 )) 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70625 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 70625 ']' 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 70625 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70625 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70625' 00:06:35.571 killing process with pid 70625 00:06:35.571 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 70625 00:06:35.572 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 70625 00:06:35.572 00:06:35.572 real 0m5.683s 00:06:35.572 user 0m5.147s 00:06:35.572 sys 0m0.467s 00:06:35.572 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:35.572 15:15:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.572 ************************************ 00:06:35.572 END TEST skip_rpc 00:06:35.572 ************************************ 00:06:35.572 15:15:41 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:35.572 15:15:41 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:06:35.572 15:15:41 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:35.572 15:15:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.572 ************************************ 00:06:35.572 START TEST skip_rpc_with_json 00:06:35.572 ************************************ 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70718 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70718 00:06:35.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 70718 ']' 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:35.572 15:15:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.831 [2024-11-10 15:15:41.978161] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:06:35.831 [2024-11-10 15:15:41.978395] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70718 ] 00:06:35.831 [2024-11-10 15:15:42.113661] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:35.831 [2024-11-10 15:15:42.152120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.091 [2024-11-10 15:15:42.193321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.661 [2024-11-10 15:15:42.816616] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:36.661 request: 00:06:36.661 { 00:06:36.661 "trtype": "tcp", 00:06:36.661 "method": "nvmf_get_transports", 00:06:36.661 "req_id": 1 00:06:36.661 } 00:06:36.661 Got JSON-RPC error response 00:06:36.661 response: 00:06:36.661 { 00:06:36.661 "code": -19, 00:06:36.661 "message": "No such device" 00:06:36.661 } 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.661 15:15:42 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.661 [2024-11-10 15:15:42.828771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.661 15:15:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.661 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.661 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.661 { 00:06:36.661 "subsystems": [ 00:06:36.661 { 00:06:36.661 "subsystem": "fsdev", 00:06:36.661 "config": [ 00:06:36.661 { 00:06:36.661 "method": "fsdev_set_opts", 00:06:36.661 "params": { 00:06:36.661 "fsdev_io_pool_size": 65535, 00:06:36.661 "fsdev_io_cache_size": 256 00:06:36.661 } 00:06:36.661 } 00:06:36.661 ] 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "subsystem": "keyring", 00:06:36.661 "config": [] 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "subsystem": "iobuf", 00:06:36.661 "config": [ 00:06:36.661 { 00:06:36.661 "method": "iobuf_set_options", 00:06:36.661 "params": { 00:06:36.661 "small_pool_count": 8192, 00:06:36.661 "large_pool_count": 1024, 00:06:36.661 "small_bufsize": 8192, 00:06:36.661 "large_bufsize": 135168, 00:06:36.661 "enable_numa": false 00:06:36.661 } 00:06:36.661 } 00:06:36.661 ] 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "subsystem": "sock", 00:06:36.661 "config": [ 00:06:36.661 { 00:06:36.661 "method": "sock_set_default_impl", 00:06:36.661 "params": { 00:06:36.661 "impl_name": "posix" 00:06:36.661 } 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "method": "sock_impl_set_options", 00:06:36.661 "params": { 00:06:36.661 "impl_name": "ssl", 
00:06:36.661 "recv_buf_size": 4096, 00:06:36.661 "send_buf_size": 4096, 00:06:36.661 "enable_recv_pipe": true, 00:06:36.661 "enable_quickack": false, 00:06:36.661 "enable_placement_id": 0, 00:06:36.661 "enable_zerocopy_send_server": true, 00:06:36.661 "enable_zerocopy_send_client": false, 00:06:36.661 "zerocopy_threshold": 0, 00:06:36.661 "tls_version": 0, 00:06:36.661 "enable_ktls": false 00:06:36.661 } 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "method": "sock_impl_set_options", 00:06:36.661 "params": { 00:06:36.661 "impl_name": "posix", 00:06:36.661 "recv_buf_size": 2097152, 00:06:36.661 "send_buf_size": 2097152, 00:06:36.661 "enable_recv_pipe": true, 00:06:36.661 "enable_quickack": false, 00:06:36.661 "enable_placement_id": 0, 00:06:36.661 "enable_zerocopy_send_server": true, 00:06:36.661 "enable_zerocopy_send_client": false, 00:06:36.661 "zerocopy_threshold": 0, 00:06:36.661 "tls_version": 0, 00:06:36.661 "enable_ktls": false 00:06:36.661 } 00:06:36.661 } 00:06:36.661 ] 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "subsystem": "vmd", 00:06:36.661 "config": [] 00:06:36.661 }, 00:06:36.661 { 00:06:36.661 "subsystem": "accel", 00:06:36.661 "config": [ 00:06:36.661 { 00:06:36.661 "method": "accel_set_options", 00:06:36.661 "params": { 00:06:36.661 "small_cache_size": 128, 00:06:36.661 "large_cache_size": 16, 00:06:36.661 "task_count": 2048, 00:06:36.661 "sequence_count": 2048, 00:06:36.662 "buf_count": 2048 00:06:36.662 } 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "bdev", 00:06:36.662 "config": [ 00:06:36.662 { 00:06:36.662 "method": "bdev_set_options", 00:06:36.662 "params": { 00:06:36.662 "bdev_io_pool_size": 65535, 00:06:36.662 "bdev_io_cache_size": 256, 00:06:36.662 "bdev_auto_examine": true, 00:06:36.662 "iobuf_small_cache_size": 128, 00:06:36.662 "iobuf_large_cache_size": 16 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "bdev_raid_set_options", 00:06:36.662 "params": { 00:06:36.662 
"process_window_size_kb": 1024, 00:06:36.662 "process_max_bandwidth_mb_sec": 0 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "bdev_iscsi_set_options", 00:06:36.662 "params": { 00:06:36.662 "timeout_sec": 30 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "bdev_nvme_set_options", 00:06:36.662 "params": { 00:06:36.662 "action_on_timeout": "none", 00:06:36.662 "timeout_us": 0, 00:06:36.662 "timeout_admin_us": 0, 00:06:36.662 "keep_alive_timeout_ms": 10000, 00:06:36.662 "arbitration_burst": 0, 00:06:36.662 "low_priority_weight": 0, 00:06:36.662 "medium_priority_weight": 0, 00:06:36.662 "high_priority_weight": 0, 00:06:36.662 "nvme_adminq_poll_period_us": 10000, 00:06:36.662 "nvme_ioq_poll_period_us": 0, 00:06:36.662 "io_queue_requests": 0, 00:06:36.662 "delay_cmd_submit": true, 00:06:36.662 "transport_retry_count": 4, 00:06:36.662 "bdev_retry_count": 3, 00:06:36.662 "transport_ack_timeout": 0, 00:06:36.662 "ctrlr_loss_timeout_sec": 0, 00:06:36.662 "reconnect_delay_sec": 0, 00:06:36.662 "fast_io_fail_timeout_sec": 0, 00:06:36.662 "disable_auto_failback": false, 00:06:36.662 "generate_uuids": false, 00:06:36.662 "transport_tos": 0, 00:06:36.662 "nvme_error_stat": false, 00:06:36.662 "rdma_srq_size": 0, 00:06:36.662 "io_path_stat": false, 00:06:36.662 "allow_accel_sequence": false, 00:06:36.662 "rdma_max_cq_size": 0, 00:06:36.662 "rdma_cm_event_timeout_ms": 0, 00:06:36.662 "dhchap_digests": [ 00:06:36.662 "sha256", 00:06:36.662 "sha384", 00:06:36.662 "sha512" 00:06:36.662 ], 00:06:36.662 "dhchap_dhgroups": [ 00:06:36.662 "null", 00:06:36.662 "ffdhe2048", 00:06:36.662 "ffdhe3072", 00:06:36.662 "ffdhe4096", 00:06:36.662 "ffdhe6144", 00:06:36.662 "ffdhe8192" 00:06:36.662 ] 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "bdev_nvme_set_hotplug", 00:06:36.662 "params": { 00:06:36.662 "period_us": 100000, 00:06:36.662 "enable": false 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": 
"bdev_wait_for_examine" 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "scsi", 00:06:36.662 "config": null 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "scheduler", 00:06:36.662 "config": [ 00:06:36.662 { 00:06:36.662 "method": "framework_set_scheduler", 00:06:36.662 "params": { 00:06:36.662 "name": "static" 00:06:36.662 } 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "vhost_scsi", 00:06:36.662 "config": [] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "vhost_blk", 00:06:36.662 "config": [] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "ublk", 00:06:36.662 "config": [] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "nbd", 00:06:36.662 "config": [] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "nvmf", 00:06:36.662 "config": [ 00:06:36.662 { 00:06:36.662 "method": "nvmf_set_config", 00:06:36.662 "params": { 00:06:36.662 "discovery_filter": "match_any", 00:06:36.662 "admin_cmd_passthru": { 00:06:36.662 "identify_ctrlr": false 00:06:36.662 }, 00:06:36.662 "dhchap_digests": [ 00:06:36.662 "sha256", 00:06:36.662 "sha384", 00:06:36.662 "sha512" 00:06:36.662 ], 00:06:36.662 "dhchap_dhgroups": [ 00:06:36.662 "null", 00:06:36.662 "ffdhe2048", 00:06:36.662 "ffdhe3072", 00:06:36.662 "ffdhe4096", 00:06:36.662 "ffdhe6144", 00:06:36.662 "ffdhe8192" 00:06:36.662 ] 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "nvmf_set_max_subsystems", 00:06:36.662 "params": { 00:06:36.662 "max_subsystems": 1024 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "nvmf_set_crdt", 00:06:36.662 "params": { 00:06:36.662 "crdt1": 0, 00:06:36.662 "crdt2": 0, 00:06:36.662 "crdt3": 0 00:06:36.662 } 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "method": "nvmf_create_transport", 00:06:36.662 "params": { 00:06:36.662 "trtype": "TCP", 00:06:36.662 "max_queue_depth": 128, 00:06:36.662 "max_io_qpairs_per_ctrlr": 127, 00:06:36.662 
"in_capsule_data_size": 4096, 00:06:36.662 "max_io_size": 131072, 00:06:36.662 "io_unit_size": 131072, 00:06:36.662 "max_aq_depth": 128, 00:06:36.662 "num_shared_buffers": 511, 00:06:36.662 "buf_cache_size": 4294967295, 00:06:36.662 "dif_insert_or_strip": false, 00:06:36.662 "zcopy": false, 00:06:36.662 "c2h_success": true, 00:06:36.662 "sock_priority": 0, 00:06:36.662 "abort_timeout_sec": 1, 00:06:36.662 "ack_timeout": 0, 00:06:36.662 "data_wr_pool_size": 0 00:06:36.662 } 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 }, 00:06:36.662 { 00:06:36.662 "subsystem": "iscsi", 00:06:36.662 "config": [ 00:06:36.662 { 00:06:36.662 "method": "iscsi_set_options", 00:06:36.662 "params": { 00:06:36.662 "node_base": "iqn.2016-06.io.spdk", 00:06:36.662 "max_sessions": 128, 00:06:36.662 "max_connections_per_session": 2, 00:06:36.662 "max_queue_depth": 64, 00:06:36.662 "default_time2wait": 2, 00:06:36.662 "default_time2retain": 20, 00:06:36.662 "first_burst_length": 8192, 00:06:36.662 "immediate_data": true, 00:06:36.662 "allow_duplicated_isid": false, 00:06:36.662 "error_recovery_level": 0, 00:06:36.662 "nop_timeout": 60, 00:06:36.662 "nop_in_interval": 30, 00:06:36.662 "disable_chap": false, 00:06:36.662 "require_chap": false, 00:06:36.662 "mutual_chap": false, 00:06:36.662 "chap_group": 0, 00:06:36.662 "max_large_datain_per_connection": 64, 00:06:36.662 "max_r2t_per_connection": 4, 00:06:36.662 "pdu_pool_size": 36864, 00:06:36.662 "immediate_data_pool_size": 16384, 00:06:36.662 "data_out_pool_size": 2048 00:06:36.662 } 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 } 00:06:36.662 ] 00:06:36.662 } 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70718 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70718 ']' 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill 
-0 70718 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:36.662 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70718 00:06:36.922 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:36.922 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:36.922 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70718' 00:06:36.922 killing process with pid 70718 00:06:36.922 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70718 00:06:36.923 15:15:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70718 00:06:37.492 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70746 00:06:37.492 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:37.492 15:15:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70746 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 70746 ']' 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 70746 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70746 00:06:42.773 killing process with pid 70746 00:06:42.773 
15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70746' 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 70746 00:06:42.773 15:15:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 70746 00:06:43.033 15:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:43.033 15:15:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:43.033 00:06:43.033 real 0m7.460s 00:06:43.033 user 0m6.726s 00:06:43.033 sys 0m1.042s 00:06:43.033 ************************************ 00:06:43.033 END TEST skip_rpc_with_json 00:06:43.033 ************************************ 00:06:43.033 15:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.033 15:15:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 15:15:49 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:43.294 15:15:49 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.294 15:15:49 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.294 15:15:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.294 ************************************ 00:06:43.294 START TEST skip_rpc_with_delay 00:06:43.294 ************************************ 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
--no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:43.294 [2024-11-10 15:15:49.520502] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.294 00:06:43.294 real 0m0.191s 00:06:43.294 user 0m0.096s 00:06:43.294 sys 0m0.093s 00:06:43.294 ************************************ 00:06:43.294 END TEST skip_rpc_with_delay 00:06:43.294 ************************************ 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.294 15:15:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:43.551 15:15:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:43.551 15:15:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:43.551 15:15:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:43.551 15:15:49 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.551 15:15:49 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.551 15:15:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.551 ************************************ 00:06:43.551 START TEST exit_on_failed_rpc_init 00:06:43.551 ************************************ 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70858 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70858 00:06:43.551 15:15:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 70858 ']' 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:43.551 15:15:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.551 [2024-11-10 15:15:49.780403] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:43.551 [2024-11-10 15:15:49.780612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:06:43.810 [2024-11-10 15:15:49.913750] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:43.810 [2024-11-10 15:15:49.951876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.810 [2024-11-10 15:15:49.994179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:44.379 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:44.379 [2024-11-10 15:15:50.661828] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:44.379 [2024-11-10 15:15:50.661953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70876 ] 00:06:44.639 [2024-11-10 15:15:50.795303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:44.639 [2024-11-10 15:15:50.834588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.639 [2024-11-10 15:15:50.858879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.639 [2024-11-10 15:15:50.858970] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:44.639 [2024-11-10 15:15:50.858984] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:44.639 [2024-11-10 15:15:50.858996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70858 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 70858 ']' 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 70858 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70858 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70858' 
00:06:44.639 killing process with pid 70858 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 70858 00:06:44.639 15:15:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 70858 00:06:45.587 00:06:45.587 real 0m1.926s 00:06:45.587 user 0m1.888s 00:06:45.587 sys 0m0.623s 00:06:45.587 15:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.587 15:15:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:45.587 ************************************ 00:06:45.587 END TEST exit_on_failed_rpc_init 00:06:45.587 ************************************ 00:06:45.587 15:15:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:45.587 00:06:45.587 real 0m15.789s 00:06:45.587 user 0m14.073s 00:06:45.587 sys 0m2.545s 00:06:45.587 15:15:51 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.587 15:15:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.587 ************************************ 00:06:45.587 END TEST skip_rpc 00:06:45.587 ************************************ 00:06:45.587 15:15:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:45.587 15:15:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.587 15:15:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.587 15:15:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.587 ************************************ 00:06:45.587 START TEST rpc_client 00:06:45.587 ************************************ 00:06:45.587 15:15:51 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:45.587 * Looking for test storage... 
00:06:45.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:45.587 15:15:51 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.587 15:15:51 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.587 15:15:51 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:45.587 15:15:51 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.587 15:15:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.862 15:15:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:45.862 15:15:51 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.862 15:15:51 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:45.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.862 --rc genhtml_branch_coverage=1 00:06:45.862 --rc genhtml_function_coverage=1 00:06:45.862 --rc genhtml_legend=1 00:06:45.862 --rc geninfo_all_blocks=1 00:06:45.862 --rc geninfo_unexecuted_blocks=1 00:06:45.862 00:06:45.862 ' 00:06:45.862 15:15:51 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:45.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.862 --rc genhtml_branch_coverage=1 00:06:45.862 --rc genhtml_function_coverage=1 00:06:45.862 --rc genhtml_legend=1 00:06:45.862 --rc geninfo_all_blocks=1 00:06:45.862 --rc geninfo_unexecuted_blocks=1 00:06:45.862 00:06:45.862 ' 00:06:45.862 15:15:51 rpc_client -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:45.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.862 --rc genhtml_branch_coverage=1 00:06:45.862 --rc genhtml_function_coverage=1 00:06:45.862 --rc genhtml_legend=1 00:06:45.862 --rc geninfo_all_blocks=1 00:06:45.862 --rc geninfo_unexecuted_blocks=1 00:06:45.862 00:06:45.862 ' 00:06:45.862 15:15:51 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:45.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.862 --rc genhtml_branch_coverage=1 00:06:45.862 --rc genhtml_function_coverage=1 00:06:45.862 --rc genhtml_legend=1 00:06:45.862 --rc geninfo_all_blocks=1 00:06:45.862 --rc geninfo_unexecuted_blocks=1 00:06:45.862 00:06:45.862 ' 00:06:45.862 15:15:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:45.862 OK 00:06:45.862 15:15:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:45.862 00:06:45.862 real 0m0.305s 00:06:45.862 user 0m0.167s 00:06:45.862 sys 0m0.152s 00:06:45.862 15:15:52 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.862 15:15:52 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:45.862 ************************************ 00:06:45.862 END TEST rpc_client 00:06:45.862 ************************************ 00:06:45.862 15:15:52 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:45.862 15:15:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.862 15:15:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.862 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.862 ************************************ 00:06:45.862 START TEST json_config 00:06:45.862 ************************************ 00:06:45.862 15:15:52 json_config -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:45.862 15:15:52 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:45.862 15:15:52 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:06:45.862 15:15:52 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.123 15:15:52 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.123 15:15:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.123 15:15:52 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.123 15:15:52 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.123 15:15:52 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.123 15:15:52 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.123 15:15:52 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:46.123 15:15:52 json_config -- scripts/common.sh@345 -- # : 1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.123 15:15:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.123 15:15:52 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@353 -- # local d=1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.123 15:15:52 json_config -- scripts/common.sh@355 -- # echo 1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.123 15:15:52 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@353 -- # local d=2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.123 15:15:52 json_config -- scripts/common.sh@355 -- # echo 2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.123 15:15:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.123 15:15:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.123 15:15:52 json_config -- scripts/common.sh@368 -- # return 0 00:06:46.123 15:15:52 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.123 15:15:52 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.123 --rc genhtml_branch_coverage=1 00:06:46.123 --rc genhtml_function_coverage=1 00:06:46.123 --rc genhtml_legend=1 00:06:46.123 --rc geninfo_all_blocks=1 00:06:46.123 --rc geninfo_unexecuted_blocks=1 00:06:46.123 00:06:46.123 ' 00:06:46.123 15:15:52 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.123 --rc genhtml_branch_coverage=1 00:06:46.123 --rc genhtml_function_coverage=1 00:06:46.123 --rc genhtml_legend=1 00:06:46.123 --rc geninfo_all_blocks=1 00:06:46.123 --rc geninfo_unexecuted_blocks=1 00:06:46.123 00:06:46.123 ' 00:06:46.123 15:15:52 json_config -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:46.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.123 --rc genhtml_branch_coverage=1 00:06:46.123 --rc genhtml_function_coverage=1 00:06:46.123 --rc genhtml_legend=1 00:06:46.123 --rc geninfo_all_blocks=1 00:06:46.123 --rc geninfo_unexecuted_blocks=1 00:06:46.123 00:06:46.123 ' 00:06:46.123 15:15:52 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.123 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.123 --rc genhtml_branch_coverage=1 00:06:46.123 --rc genhtml_function_coverage=1 00:06:46.123 --rc genhtml_legend=1 00:06:46.123 --rc geninfo_all_blocks=1 00:06:46.123 --rc geninfo_unexecuted_blocks=1 00:06:46.123 00:06:46.123 ' 00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:126d9008-3427-4a83-8f0d-d857067534ac 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=126d9008-3427-4a83-8f0d-d857067534ac 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.123 15:15:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.123 15:15:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.123 15:15:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.123 15:15:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.123 15:15:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.123 15:15:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.123 15:15:52 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.123 15:15:52 json_config -- paths/export.sh@5 -- # export PATH 00:06:46.123 15:15:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@51 -- # : 0 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.123 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.123 15:15:52 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:46.123 15:15:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:46.123 WARNING: No tests are enabled so not running JSON configuration tests 00:06:46.124 15:15:52 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:46.124 15:15:52 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:46.124 ************************************ 00:06:46.124 END TEST json_config 00:06:46.124 ************************************ 00:06:46.124 00:06:46.124 real 0m0.233s 00:06:46.124 user 0m0.138s 00:06:46.124 sys 0m0.098s 00:06:46.124 15:15:52 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.124 15:15:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 15:15:52 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:46.124 15:15:52 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.124 15:15:52 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.124 15:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.124 ************************************ 00:06:46.124 START TEST json_config_extra_key 00:06:46.124 ************************************ 00:06:46.124 15:15:52 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:46.384 15:15:52 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:46.384 15:15:52 json_config_extra_key -- 
common/autotest_common.sh@1691 -- # lcov --version 00:06:46.384 15:15:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:46.384 15:15:52 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.384 15:15:52 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:46.384 15:15:52 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.384 15:15:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:46.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.384 --rc genhtml_branch_coverage=1 00:06:46.384 --rc genhtml_function_coverage=1 00:06:46.384 --rc genhtml_legend=1 00:06:46.385 --rc geninfo_all_blocks=1 00:06:46.385 --rc geninfo_unexecuted_blocks=1 00:06:46.385 00:06:46.385 ' 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:46.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.385 --rc genhtml_branch_coverage=1 00:06:46.385 --rc genhtml_function_coverage=1 00:06:46.385 --rc 
genhtml_legend=1 00:06:46.385 --rc geninfo_all_blocks=1 00:06:46.385 --rc geninfo_unexecuted_blocks=1 00:06:46.385 00:06:46.385 ' 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:46.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.385 --rc genhtml_branch_coverage=1 00:06:46.385 --rc genhtml_function_coverage=1 00:06:46.385 --rc genhtml_legend=1 00:06:46.385 --rc geninfo_all_blocks=1 00:06:46.385 --rc geninfo_unexecuted_blocks=1 00:06:46.385 00:06:46.385 ' 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:46.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.385 --rc genhtml_branch_coverage=1 00:06:46.385 --rc genhtml_function_coverage=1 00:06:46.385 --rc genhtml_legend=1 00:06:46.385 --rc geninfo_all_blocks=1 00:06:46.385 --rc geninfo_unexecuted_blocks=1 00:06:46.385 00:06:46.385 ' 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:126d9008-3427-4a83-8f0d-d857067534ac 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=126d9008-3427-4a83-8f0d-d857067534ac 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.385 15:15:52 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.385 15:15:52 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.385 15:15:52 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.385 15:15:52 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.385 15:15:52 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.385 15:15:52 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.385 15:15:52 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.385 15:15:52 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:46.385 15:15:52 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.385 15:15:52 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:46.385 INFO: launching applications... 
00:06:46.385 15:15:52 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=71064 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:46.385 Waiting for target to run... 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 71064 /var/tmp/spdk_tgt.sock 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 71064 ']' 00:06:46.385 15:15:52 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:46.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:46.385 15:15:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.385 [2024-11-10 15:15:52.735650] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:46.385 [2024-11-10 15:15:52.735849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71064 ] 00:06:46.956 [2024-11-10 15:15:53.071300] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:46.956 [2024-11-10 15:15:53.111698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.956 [2024-11-10 15:15:53.134961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.216 15:15:53 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:47.216 15:15:53 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:47.216 00:06:47.216 15:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:47.216 INFO: shutting down applications... 
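The `waitforlisten` step traced above blocks until the freshly launched `spdk_tgt` is reachable on its UNIX domain RPC socket, retrying up to `max_retries=100` times. A hedged sketch of that polling pattern, under the assumption that socket creation is the readiness signal (`wait_for_socket` is an illustrative name; SPDK's real helper additionally probes the socket with an RPC call):

```shell
# Illustrative poll loop: succeed once $sock exists and is a socket,
# give up after $max_retries attempts. Not SPDK's actual waitforlisten.
wait_for_socket() {
    local sock="$1" max_retries="${2:-100}" i=0
    while [ "$i" -lt "$max_retries" ]; do
        if [ -S "$sock" ]; then   # -S: path exists and is a socket
            return 0
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1
}
```

Polling for the socket file avoids a race where the test harness issues RPCs before the target has finished binding.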
00:06:47.216 15:15:53 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 71064 ]] 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 71064 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71064 00:06:47.216 15:15:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.786 15:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.786 15:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.786 15:15:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71064 00:06:47.786 15:15:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 71064 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:48.356 15:15:54 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:48.356 SPDK target shutdown done 00:06:48.356 15:15:54 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 
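The shutdown sequence in the trace above sends SIGINT to the target, then polls `kill -0` (signal 0 checks process existence without delivering anything) every half second for up to 30 iterations before declaring success. Condensed into a sketch (`shutdown_app` is an illustrative name, not SPDK's exact `json_config/common.sh` code):

```shell
# Illustrative condensation of the traced shutdown loop: SIGINT, then poll.
shutdown_app() {
    local pid="$1" i=0
    kill -SIGINT "$pid" 2>/dev/null
    while [ "$i" -lt 30 ]; do
        if ! kill -0 "$pid" 2>/dev/null; then  # process no longer exists
            echo "target shutdown done"
            return 0
        fi
        sleep 0.5
        i=$((i + 1))
    done
    return 1  # still running after ~15s; a caller might escalate to SIGKILL
}
```

The bounded loop is what keeps a hung target from stalling the whole test run indefinitely.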
00:06:48.356 Success 00:06:48.356 ************************************ 00:06:48.356 END TEST json_config_extra_key 00:06:48.356 ************************************ 00:06:48.356 00:06:48.356 real 0m2.164s 00:06:48.356 user 0m1.619s 00:06:48.356 sys 0m0.495s 00:06:48.356 15:15:54 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.356 15:15:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:48.356 15:15:54 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.356 15:15:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.356 15:15:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.356 15:15:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.356 ************************************ 00:06:48.356 START TEST alias_rpc 00:06:48.356 ************************************ 00:06:48.356 15:15:54 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:48.617 * Looking for test storage... 
00:06:48.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.617 15:15:54 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.617 --rc genhtml_branch_coverage=1 00:06:48.617 --rc genhtml_function_coverage=1 00:06:48.617 --rc genhtml_legend=1 00:06:48.617 --rc geninfo_all_blocks=1 00:06:48.617 --rc geninfo_unexecuted_blocks=1 00:06:48.617 00:06:48.617 ' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.617 --rc genhtml_branch_coverage=1 00:06:48.617 --rc genhtml_function_coverage=1 00:06:48.617 --rc genhtml_legend=1 00:06:48.617 --rc geninfo_all_blocks=1 00:06:48.617 --rc geninfo_unexecuted_blocks=1 00:06:48.617 00:06:48.617 ' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1705 -- 
# export 'LCOV=lcov 00:06:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.617 --rc genhtml_branch_coverage=1 00:06:48.617 --rc genhtml_function_coverage=1 00:06:48.617 --rc genhtml_legend=1 00:06:48.617 --rc geninfo_all_blocks=1 00:06:48.617 --rc geninfo_unexecuted_blocks=1 00:06:48.617 00:06:48.617 ' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:48.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.617 --rc genhtml_branch_coverage=1 00:06:48.617 --rc genhtml_function_coverage=1 00:06:48.617 --rc genhtml_legend=1 00:06:48.617 --rc geninfo_all_blocks=1 00:06:48.617 --rc geninfo_unexecuted_blocks=1 00:06:48.617 00:06:48.617 ' 00:06:48.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.617 15:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.617 15:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=71144 00:06:48.617 15:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.617 15:15:54 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 71144 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 71144 ']' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:48.617 15:15:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.617 [2024-11-10 15:15:54.954135] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
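The `lt 1.15 2` check traced repeatedly above (deciding whether the installed lcov predates 2.x and therefore needs the `--rc lcov_*` option spelling) splits each version string on `.`, `-`, or `:` via `IFS=.-: read -ra` and compares components numerically, treating missing components as 0. A standalone sketch of that comparison (`version_lt` is an illustrative name for the pattern, not SPDK's exact `scripts/common.sh` implementation, and it assumes purely numeric components):

```shell
# Returns 0 (true) when $1 is strictly lower than $2, comparing
# dot/dash/colon-separated components as integers; components missing
# from the shorter version count as 0.
version_lt() {
    local -a ver1 ver2
    local v n a b
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi   # first differing component decides
        if (( a > b )); then return 1; fi
    done
    return 1    # equal versions are not "less than"
}
```

Component-wise integer comparison is what makes `1.15 < 2` hold here, where a naive string comparison would get it backwards.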
00:06:48.617 [2024-11-10 15:15:54.954378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71144 ] 00:06:48.877 [2024-11-10 15:15:55.087497] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.877 [2024-11-10 15:15:55.126398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.877 [2024-11-10 15:15:55.167824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.447 15:15:55 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:49.447 15:15:55 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:49.447 15:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:49.706 15:15:55 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 71144 00:06:49.706 15:15:55 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 71144 ']' 00:06:49.706 15:15:55 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 71144 00:06:49.706 15:15:55 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:06:49.706 15:15:55 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:49.706 15:15:55 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71144 00:06:49.706 15:15:56 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:49.706 15:15:56 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:49.706 15:15:56 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71144' 00:06:49.706 killing process with pid 71144 00:06:49.706 15:15:56 alias_rpc -- common/autotest_common.sh@971 -- # kill 71144 00:06:49.706 15:15:56 alias_rpc -- 
common/autotest_common.sh@976 -- # wait 71144 00:06:50.276 00:06:50.276 real 0m1.981s 00:06:50.276 user 0m1.852s 00:06:50.276 sys 0m0.631s 00:06:50.276 15:15:56 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:50.276 15:15:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.276 ************************************ 00:06:50.276 END TEST alias_rpc 00:06:50.276 ************************************ 00:06:50.536 15:15:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:50.536 15:15:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:50.536 15:15:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:50.536 15:15:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:50.536 15:15:56 -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 ************************************ 00:06:50.536 START TEST spdkcli_tcp 00:06:50.536 ************************************ 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:50.536 * Looking for test storage... 
00:06:50.536 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.536 15:15:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.536 --rc genhtml_branch_coverage=1 00:06:50.536 --rc genhtml_function_coverage=1 00:06:50.536 --rc genhtml_legend=1 00:06:50.536 --rc geninfo_all_blocks=1 00:06:50.536 --rc geninfo_unexecuted_blocks=1 00:06:50.536 00:06:50.536 ' 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.536 --rc genhtml_branch_coverage=1 00:06:50.536 --rc genhtml_function_coverage=1 00:06:50.536 --rc genhtml_legend=1 00:06:50.536 --rc geninfo_all_blocks=1 00:06:50.536 --rc geninfo_unexecuted_blocks=1 00:06:50.536 00:06:50.536 ' 00:06:50.536 15:15:56 spdkcli_tcp -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.536 --rc genhtml_branch_coverage=1 00:06:50.536 --rc genhtml_function_coverage=1 00:06:50.536 --rc genhtml_legend=1 00:06:50.536 --rc geninfo_all_blocks=1 00:06:50.536 --rc geninfo_unexecuted_blocks=1 00:06:50.536 00:06:50.536 ' 00:06:50.536 15:15:56 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.536 --rc genhtml_branch_coverage=1 00:06:50.536 --rc genhtml_function_coverage=1 00:06:50.536 --rc genhtml_legend=1 00:06:50.536 --rc geninfo_all_blocks=1 00:06:50.536 --rc geninfo_unexecuted_blocks=1 00:06:50.536 00:06:50.536 ' 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71229 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:50.796 15:15:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71229 00:06:50.796 15:15:56 spdkcli_tcp -- 
common/autotest_common.sh@833 -- # '[' -z 71229 ']' 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.796 15:15:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.796 [2024-11-10 15:15:57.002565] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:50.796 [2024-11-10 15:15:57.002783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:06:50.796 [2024-11-10 15:15:57.136145] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
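The `spdk_tgt` launched above was given `-m 0x3`, a hexadecimal core mask; each set bit selects one CPU core, which is why the trace that follows reports "Total cores available: 2" and starts reactors on cores 0 and 1. A minimal sketch of that decoding (illustrative only; `cores_from_mask` is not an SPDK helper):

```shell
#!/usr/bin/env bash
# Decode a DPDK/SPDK-style hex core mask (e.g. spdk_tgt -m 0x3) into core IDs.
cores_from_mask() {
    local mask=$(( $1 )) bit=0
    local cores=()
    while (( mask > 0 )); do
        if (( mask & 1 )); then
            cores+=("$bit")        # this core is selected
        fi
        mask=$(( mask >> 1 ))
        bit=$(( bit + 1 ))
    done
    echo "${cores[@]}"
}

cores_from_mask 0x3   # -> "0 1": one reactor per set bit
```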
00:06:51.055 [2024-11-10 15:15:57.174384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.055 [2024-11-10 15:15:57.215371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.055 [2024-11-10 15:15:57.215468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.625 15:15:57 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.625 15:15:57 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:06:51.625 15:15:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71246 00:06:51.625 15:15:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:51.625 15:15:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:51.885 [ 00:06:51.885 "bdev_malloc_delete", 00:06:51.885 "bdev_malloc_create", 00:06:51.885 "bdev_null_resize", 00:06:51.885 "bdev_null_delete", 00:06:51.885 "bdev_null_create", 00:06:51.885 "bdev_nvme_cuse_unregister", 00:06:51.885 "bdev_nvme_cuse_register", 00:06:51.885 "bdev_opal_new_user", 00:06:51.885 "bdev_opal_set_lock_state", 00:06:51.885 "bdev_opal_delete", 00:06:51.885 "bdev_opal_get_info", 00:06:51.885 "bdev_opal_create", 00:06:51.885 "bdev_nvme_opal_revert", 00:06:51.885 "bdev_nvme_opal_init", 00:06:51.885 "bdev_nvme_send_cmd", 00:06:51.885 "bdev_nvme_set_keys", 00:06:51.885 "bdev_nvme_get_path_iostat", 00:06:51.885 "bdev_nvme_get_mdns_discovery_info", 00:06:51.885 "bdev_nvme_stop_mdns_discovery", 00:06:51.885 "bdev_nvme_start_mdns_discovery", 00:06:51.885 "bdev_nvme_set_multipath_policy", 00:06:51.885 "bdev_nvme_set_preferred_path", 00:06:51.885 "bdev_nvme_get_io_paths", 00:06:51.885 "bdev_nvme_remove_error_injection", 00:06:51.885 "bdev_nvme_add_error_injection", 00:06:51.885 "bdev_nvme_get_discovery_info", 00:06:51.885 "bdev_nvme_stop_discovery", 00:06:51.885 "bdev_nvme_start_discovery", 00:06:51.885 
"bdev_nvme_get_controller_health_info", 00:06:51.885 "bdev_nvme_disable_controller", 00:06:51.885 "bdev_nvme_enable_controller", 00:06:51.885 "bdev_nvme_reset_controller", 00:06:51.885 "bdev_nvme_get_transport_statistics", 00:06:51.885 "bdev_nvme_apply_firmware", 00:06:51.885 "bdev_nvme_detach_controller", 00:06:51.885 "bdev_nvme_get_controllers", 00:06:51.885 "bdev_nvme_attach_controller", 00:06:51.885 "bdev_nvme_set_hotplug", 00:06:51.885 "bdev_nvme_set_options", 00:06:51.885 "bdev_passthru_delete", 00:06:51.885 "bdev_passthru_create", 00:06:51.885 "bdev_lvol_set_parent_bdev", 00:06:51.885 "bdev_lvol_set_parent", 00:06:51.885 "bdev_lvol_check_shallow_copy", 00:06:51.885 "bdev_lvol_start_shallow_copy", 00:06:51.885 "bdev_lvol_grow_lvstore", 00:06:51.885 "bdev_lvol_get_lvols", 00:06:51.885 "bdev_lvol_get_lvstores", 00:06:51.885 "bdev_lvol_delete", 00:06:51.885 "bdev_lvol_set_read_only", 00:06:51.885 "bdev_lvol_resize", 00:06:51.885 "bdev_lvol_decouple_parent", 00:06:51.885 "bdev_lvol_inflate", 00:06:51.885 "bdev_lvol_rename", 00:06:51.885 "bdev_lvol_clone_bdev", 00:06:51.885 "bdev_lvol_clone", 00:06:51.885 "bdev_lvol_snapshot", 00:06:51.885 "bdev_lvol_create", 00:06:51.885 "bdev_lvol_delete_lvstore", 00:06:51.885 "bdev_lvol_rename_lvstore", 00:06:51.885 "bdev_lvol_create_lvstore", 00:06:51.885 "bdev_raid_set_options", 00:06:51.885 "bdev_raid_remove_base_bdev", 00:06:51.885 "bdev_raid_add_base_bdev", 00:06:51.885 "bdev_raid_delete", 00:06:51.885 "bdev_raid_create", 00:06:51.885 "bdev_raid_get_bdevs", 00:06:51.885 "bdev_error_inject_error", 00:06:51.885 "bdev_error_delete", 00:06:51.885 "bdev_error_create", 00:06:51.885 "bdev_split_delete", 00:06:51.885 "bdev_split_create", 00:06:51.885 "bdev_delay_delete", 00:06:51.885 "bdev_delay_create", 00:06:51.885 "bdev_delay_update_latency", 00:06:51.885 "bdev_zone_block_delete", 00:06:51.885 "bdev_zone_block_create", 00:06:51.885 "blobfs_create", 00:06:51.885 "blobfs_detect", 00:06:51.885 "blobfs_set_cache_size", 00:06:51.885 
"bdev_aio_delete", 00:06:51.885 "bdev_aio_rescan", 00:06:51.885 "bdev_aio_create", 00:06:51.885 "bdev_ftl_set_property", 00:06:51.885 "bdev_ftl_get_properties", 00:06:51.885 "bdev_ftl_get_stats", 00:06:51.885 "bdev_ftl_unmap", 00:06:51.885 "bdev_ftl_unload", 00:06:51.885 "bdev_ftl_delete", 00:06:51.885 "bdev_ftl_load", 00:06:51.885 "bdev_ftl_create", 00:06:51.885 "bdev_virtio_attach_controller", 00:06:51.885 "bdev_virtio_scsi_get_devices", 00:06:51.885 "bdev_virtio_detach_controller", 00:06:51.885 "bdev_virtio_blk_set_hotplug", 00:06:51.885 "bdev_iscsi_delete", 00:06:51.885 "bdev_iscsi_create", 00:06:51.885 "bdev_iscsi_set_options", 00:06:51.885 "accel_error_inject_error", 00:06:51.885 "ioat_scan_accel_module", 00:06:51.885 "dsa_scan_accel_module", 00:06:51.885 "iaa_scan_accel_module", 00:06:51.885 "keyring_file_remove_key", 00:06:51.885 "keyring_file_add_key", 00:06:51.885 "keyring_linux_set_options", 00:06:51.885 "fsdev_aio_delete", 00:06:51.885 "fsdev_aio_create", 00:06:51.885 "iscsi_get_histogram", 00:06:51.885 "iscsi_enable_histogram", 00:06:51.885 "iscsi_set_options", 00:06:51.885 "iscsi_get_auth_groups", 00:06:51.885 "iscsi_auth_group_remove_secret", 00:06:51.885 "iscsi_auth_group_add_secret", 00:06:51.885 "iscsi_delete_auth_group", 00:06:51.885 "iscsi_create_auth_group", 00:06:51.885 "iscsi_set_discovery_auth", 00:06:51.885 "iscsi_get_options", 00:06:51.885 "iscsi_target_node_request_logout", 00:06:51.885 "iscsi_target_node_set_redirect", 00:06:51.885 "iscsi_target_node_set_auth", 00:06:51.885 "iscsi_target_node_add_lun", 00:06:51.885 "iscsi_get_stats", 00:06:51.885 "iscsi_get_connections", 00:06:51.885 "iscsi_portal_group_set_auth", 00:06:51.885 "iscsi_start_portal_group", 00:06:51.886 "iscsi_delete_portal_group", 00:06:51.886 "iscsi_create_portal_group", 00:06:51.886 "iscsi_get_portal_groups", 00:06:51.886 "iscsi_delete_target_node", 00:06:51.886 "iscsi_target_node_remove_pg_ig_maps", 00:06:51.886 "iscsi_target_node_add_pg_ig_maps", 00:06:51.886 
"iscsi_create_target_node", 00:06:51.886 "iscsi_get_target_nodes", 00:06:51.886 "iscsi_delete_initiator_group", 00:06:51.886 "iscsi_initiator_group_remove_initiators", 00:06:51.886 "iscsi_initiator_group_add_initiators", 00:06:51.886 "iscsi_create_initiator_group", 00:06:51.886 "iscsi_get_initiator_groups", 00:06:51.886 "nvmf_set_crdt", 00:06:51.886 "nvmf_set_config", 00:06:51.886 "nvmf_set_max_subsystems", 00:06:51.886 "nvmf_stop_mdns_prr", 00:06:51.886 "nvmf_publish_mdns_prr", 00:06:51.886 "nvmf_subsystem_get_listeners", 00:06:51.886 "nvmf_subsystem_get_qpairs", 00:06:51.886 "nvmf_subsystem_get_controllers", 00:06:51.886 "nvmf_get_stats", 00:06:51.886 "nvmf_get_transports", 00:06:51.886 "nvmf_create_transport", 00:06:51.886 "nvmf_get_targets", 00:06:51.886 "nvmf_delete_target", 00:06:51.886 "nvmf_create_target", 00:06:51.886 "nvmf_subsystem_allow_any_host", 00:06:51.886 "nvmf_subsystem_set_keys", 00:06:51.886 "nvmf_subsystem_remove_host", 00:06:51.886 "nvmf_subsystem_add_host", 00:06:51.886 "nvmf_ns_remove_host", 00:06:51.886 "nvmf_ns_add_host", 00:06:51.886 "nvmf_subsystem_remove_ns", 00:06:51.886 "nvmf_subsystem_set_ns_ana_group", 00:06:51.886 "nvmf_subsystem_add_ns", 00:06:51.886 "nvmf_subsystem_listener_set_ana_state", 00:06:51.886 "nvmf_discovery_get_referrals", 00:06:51.886 "nvmf_discovery_remove_referral", 00:06:51.886 "nvmf_discovery_add_referral", 00:06:51.886 "nvmf_subsystem_remove_listener", 00:06:51.886 "nvmf_subsystem_add_listener", 00:06:51.886 "nvmf_delete_subsystem", 00:06:51.886 "nvmf_create_subsystem", 00:06:51.886 "nvmf_get_subsystems", 00:06:51.886 "env_dpdk_get_mem_stats", 00:06:51.886 "nbd_get_disks", 00:06:51.886 "nbd_stop_disk", 00:06:51.886 "nbd_start_disk", 00:06:51.886 "ublk_recover_disk", 00:06:51.886 "ublk_get_disks", 00:06:51.886 "ublk_stop_disk", 00:06:51.886 "ublk_start_disk", 00:06:51.886 "ublk_destroy_target", 00:06:51.886 "ublk_create_target", 00:06:51.886 "virtio_blk_create_transport", 00:06:51.886 "virtio_blk_get_transports", 
00:06:51.886 "vhost_controller_set_coalescing", 00:06:51.886 "vhost_get_controllers", 00:06:51.886 "vhost_delete_controller", 00:06:51.886 "vhost_create_blk_controller", 00:06:51.886 "vhost_scsi_controller_remove_target", 00:06:51.886 "vhost_scsi_controller_add_target", 00:06:51.886 "vhost_start_scsi_controller", 00:06:51.886 "vhost_create_scsi_controller", 00:06:51.886 "thread_set_cpumask", 00:06:51.886 "scheduler_set_options", 00:06:51.886 "framework_get_governor", 00:06:51.886 "framework_get_scheduler", 00:06:51.886 "framework_set_scheduler", 00:06:51.886 "framework_get_reactors", 00:06:51.886 "thread_get_io_channels", 00:06:51.886 "thread_get_pollers", 00:06:51.886 "thread_get_stats", 00:06:51.886 "framework_monitor_context_switch", 00:06:51.886 "spdk_kill_instance", 00:06:51.886 "log_enable_timestamps", 00:06:51.886 "log_get_flags", 00:06:51.886 "log_clear_flag", 00:06:51.886 "log_set_flag", 00:06:51.886 "log_get_level", 00:06:51.886 "log_set_level", 00:06:51.886 "log_get_print_level", 00:06:51.886 "log_set_print_level", 00:06:51.886 "framework_enable_cpumask_locks", 00:06:51.886 "framework_disable_cpumask_locks", 00:06:51.886 "framework_wait_init", 00:06:51.886 "framework_start_init", 00:06:51.886 "scsi_get_devices", 00:06:51.886 "bdev_get_histogram", 00:06:51.886 "bdev_enable_histogram", 00:06:51.886 "bdev_set_qos_limit", 00:06:51.886 "bdev_set_qd_sampling_period", 00:06:51.886 "bdev_get_bdevs", 00:06:51.886 "bdev_reset_iostat", 00:06:51.886 "bdev_get_iostat", 00:06:51.886 "bdev_examine", 00:06:51.886 "bdev_wait_for_examine", 00:06:51.886 "bdev_set_options", 00:06:51.886 "accel_get_stats", 00:06:51.886 "accel_set_options", 00:06:51.886 "accel_set_driver", 00:06:51.886 "accel_crypto_key_destroy", 00:06:51.886 "accel_crypto_keys_get", 00:06:51.886 "accel_crypto_key_create", 00:06:51.886 "accel_assign_opc", 00:06:51.886 "accel_get_module_info", 00:06:51.886 "accel_get_opc_assignments", 00:06:51.886 "vmd_rescan", 00:06:51.886 "vmd_remove_device", 00:06:51.886 
"vmd_enable", 00:06:51.886 "sock_get_default_impl", 00:06:51.886 "sock_set_default_impl", 00:06:51.886 "sock_impl_set_options", 00:06:51.886 "sock_impl_get_options", 00:06:51.886 "iobuf_get_stats", 00:06:51.886 "iobuf_set_options", 00:06:51.886 "keyring_get_keys", 00:06:51.886 "framework_get_pci_devices", 00:06:51.886 "framework_get_config", 00:06:51.886 "framework_get_subsystems", 00:06:51.886 "fsdev_set_opts", 00:06:51.886 "fsdev_get_opts", 00:06:51.886 "trace_get_info", 00:06:51.886 "trace_get_tpoint_group_mask", 00:06:51.886 "trace_disable_tpoint_group", 00:06:51.886 "trace_enable_tpoint_group", 00:06:51.886 "trace_clear_tpoint_mask", 00:06:51.886 "trace_set_tpoint_mask", 00:06:51.886 "notify_get_notifications", 00:06:51.886 "notify_get_types", 00:06:51.886 "spdk_get_version", 00:06:51.886 "rpc_get_methods" 00:06:51.886 ] 00:06:51.886 15:15:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:51.886 15:15:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:51.886 15:15:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71229 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 71229 ']' 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 71229 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71229 00:06:51.886 killing process with pid 71229 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:51.886 15:15:58 spdkcli_tcp -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 71229' 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 71229 00:06:51.886 15:15:58 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 71229 00:06:52.456 ************************************ 00:06:52.456 END TEST spdkcli_tcp 00:06:52.456 ************************************ 00:06:52.456 00:06:52.456 real 0m2.045s 00:06:52.456 user 0m3.321s 00:06:52.456 sys 0m0.709s 00:06:52.456 15:15:58 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:52.456 15:15:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:52.456 15:15:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.456 15:15:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:52.456 15:15:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:52.456 15:15:58 -- common/autotest_common.sh@10 -- # set +x 00:06:52.456 ************************************ 00:06:52.456 START TEST dpdk_mem_utility 00:06:52.456 ************************************ 00:06:52.456 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:52.715 * Looking for test storage... 
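The `cmp_versions` trace from scripts/common.sh above (splitting version strings with `IFS=.-:` into `ver1`/`ver2` arrays and comparing components numerically) is how the harness decides that lcov 1.15 is older than 2 before enabling the extra `--rc` coverage flags. A self-contained sketch of the same comparison; `version_lt` is an illustrative name, not the script's actual `lt`/`cmp_versions` helpers:

```shell
#!/usr/bin/env bash
# Return success (0) when version $1 sorts strictly before version $2.
# Components split on '.', '-' and ':', mirroring the cmp_versions trace.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    # Compare component by component, padding the shorter version with 0s.
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1   # equal, so not strictly less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```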
00:06:52.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:52.715 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:52.715 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:06:52.715 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:52.715 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:52.715 15:15:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.715 15:15:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.715 15:15:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.715 15:15:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.716 15:15:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:52.716 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.716 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:52.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.716 --rc genhtml_branch_coverage=1 00:06:52.716 --rc genhtml_function_coverage=1 00:06:52.716 --rc genhtml_legend=1 00:06:52.716 --rc geninfo_all_blocks=1 00:06:52.716 --rc geninfo_unexecuted_blocks=1 00:06:52.716 00:06:52.716 ' 00:06:52.716 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:52.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.716 --rc genhtml_branch_coverage=1 00:06:52.716 --rc genhtml_function_coverage=1 00:06:52.716 --rc genhtml_legend=1 00:06:52.716 --rc geninfo_all_blocks=1 00:06:52.716 --rc 
geninfo_unexecuted_blocks=1 00:06:52.716 00:06:52.716 ' 00:06:52.716 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:52.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.716 --rc genhtml_branch_coverage=1 00:06:52.716 --rc genhtml_function_coverage=1 00:06:52.716 --rc genhtml_legend=1 00:06:52.716 --rc geninfo_all_blocks=1 00:06:52.716 --rc geninfo_unexecuted_blocks=1 00:06:52.716 00:06:52.716 ' 00:06:52.716 15:15:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:52.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.716 --rc genhtml_branch_coverage=1 00:06:52.716 --rc genhtml_function_coverage=1 00:06:52.716 --rc genhtml_legend=1 00:06:52.716 --rc geninfo_all_blocks=1 00:06:52.716 --rc geninfo_unexecuted_blocks=1 00:06:52.716 00:06:52.716 ' 00:06:52.716 15:15:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:52.716 15:15:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:52.716 15:15:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71329 00:06:52.716 15:15:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71329 00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 71329 ']' 00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
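The dpdk_mem_utility test that follows fetches `env_dpdk_get_mem_stats` over RPC into /tmp/spdk_mem_dump.txt and renders it with scripts/dpdk_mem_info.py, producing the heap/mempool summary seen below. A rough sketch of extracting "name, size-in-MiB" pairs from such summary lines; the field layout is assumed from this log's output, not from the tool's documentation:

```shell
#!/usr/bin/env bash
# Pull "name size-in-MiB" pairs out of dpdk_mem_info.py-style summary lines
# such as: "size: 50.003479 MiB name: msgpool_71329".
summary_sizes() {
    awk '/size: .* MiB name: / {
        for (i = 1; i <= NF; i++)
            if ($i == "size:") size = $(i + 1)
            else if ($i == "name:") print $(i + 1), size
    }'
}

summary_sizes <<'EOF'
size: 50.003479 MiB name: msgpool_71329
size: 36.509338 MiB name: fsdev_io_71329
size: 4.133484 MiB name: evtpool_71329
EOF
```

Piping the real dump through a filter like this makes it easy to spot which pool dominates the 595 MiB of mempool usage reported below.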
00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.716 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:52.975 [2024-11-10 15:15:59.087104] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:52.975 [2024-11-10 15:15:59.087346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71329 ] 00:06:52.975 [2024-11-10 15:15:59.220456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:52.975 [2024-11-10 15:15:59.259683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.975 [2024-11-10 15:15:59.297921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.915 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.915 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:06:53.915 15:15:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:53.915 15:15:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:53.915 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.915 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:53.915 { 00:06:53.915 "filename": "/tmp/spdk_mem_dump.txt" 00:06:53.915 } 00:06:53.915 15:15:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.915 15:15:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:53.915 DPDK memory size 810.000000 MiB in 1 heap(s) 
00:06:53.915 1 heaps totaling size 810.000000 MiB 00:06:53.915 size: 810.000000 MiB heap id: 0 00:06:53.915 end heaps---------- 00:06:53.915 9 mempools totaling size 595.772034 MiB 00:06:53.915 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:53.915 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:53.915 size: 92.545471 MiB name: bdev_io_71329 00:06:53.915 size: 50.003479 MiB name: msgpool_71329 00:06:53.915 size: 36.509338 MiB name: fsdev_io_71329 00:06:53.915 size: 21.763794 MiB name: PDU_Pool 00:06:53.915 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:53.915 size: 4.133484 MiB name: evtpool_71329 00:06:53.915 size: 0.026123 MiB name: Session_Pool 00:06:53.915 end mempools------- 00:06:53.915 6 memzones totaling size 4.142822 MiB 00:06:53.915 size: 1.000366 MiB name: RG_ring_0_71329 00:06:53.915 size: 1.000366 MiB name: RG_ring_1_71329 00:06:53.915 size: 1.000366 MiB name: RG_ring_4_71329 00:06:53.915 size: 1.000366 MiB name: RG_ring_5_71329 00:06:53.915 size: 0.125366 MiB name: RG_ring_2_71329 00:06:53.915 size: 0.015991 MiB name: RG_ring_3_71329 00:06:53.915 end memzones------- 00:06:53.915 15:15:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:53.915 heap id: 0 total size: 810.000000 MiB number of busy elements: 307 number of free elements: 15 00:06:53.915 list of free elements. 
size: 10.954895 MiB 00:06:53.915 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:53.915 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:53.915 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:53.915 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:53.915 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:53.915 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:53.915 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:53.915 element at address: 0x200000200000 with size: 0.858093 MiB 00:06:53.915 element at address: 0x20001a600000 with size: 0.568054 MiB 00:06:53.915 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:53.915 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:53.915 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:53.915 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:53.915 element at address: 0x200027a00000 with size: 0.396301 MiB 00:06:53.915 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:53.915 list of standard malloc elements. 
size: 199.126221 MiB 00:06:53.915 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:53.915 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:53.915 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:53.915 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:53.915 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:53.915 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:53.915 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:53.915 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:53.915 element at address: 0x2000002fbcc0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000003fdec0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:53.915 element at 
address: 0x2000004ff400 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:53.915 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:53.915 element at address: 0x20000087f080 with size: 0.000183 MiB 
00:06:53.915 element at address: 0x20000087f140 with size: 0.000183 MiB
00:06:53.917 element at address: 
0x200027a6fd80 with size: 0.000183 MiB
00:06:53.917 element at address: 0x200027a6fe40 with size: 0.000183 MiB
00:06:53.917 element at address: 0x200027a6ff00 with size: 0.000183 MiB
00:06:53.917 list of memzone associated elements. size: 599.918884 MiB
00:06:53.917 element at address: 0x20001a695500 with size: 211.416748 MiB
00:06:53.917 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:53.917 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:06:53.917 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:53.917 element at address: 0x200012df4780 with size: 92.045044 MiB
00:06:53.917 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71329_0
00:06:53.917 element at address: 0x200000dff380 with size: 48.003052 MiB
00:06:53.917 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71329_0
00:06:53.917 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:06:53.917 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71329_0
00:06:53.917 element at address: 0x2000191be940 with size: 20.255554 MiB
00:06:53.917 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:53.917 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:06:53.917 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:53.917 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:06:53.917 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71329_0
00:06:53.917 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:06:53.917 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71329
00:06:53.917 element at address: 0x2000002fbd80 with size: 1.008118 MiB
00:06:53.917 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71329
00:06:53.917 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:06:53.917 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:53.917 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:06:53.917 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:53.917 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:06:53.917 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:53.917 element at address: 0x200003efba40 with size: 1.008118 MiB
00:06:53.917 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:53.917 element at address: 0x200000cff180 with size: 1.000488 MiB
00:06:53.917 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71329
00:06:53.917 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:06:53.917 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71329
00:06:53.917 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:06:53.917 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71329
00:06:53.918 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:06:53.918 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71329
00:06:53.918 element at address: 0x20000087f740 with size: 0.500488 MiB
00:06:53.918 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71329
00:06:53.918 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:06:53.918 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71329
00:06:53.918 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:06:53.918 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:53.918 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:06:53.918 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:53.918 element at address: 0x20001907c540 with size: 0.250488 MiB
00:06:53.918 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:53.918 element at address: 0x2000002dbac0 with size: 0.125488 MiB
00:06:53.918 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71329
00:06:53.918 element at address: 0x20000085e640 with size: 0.125488 MiB
00:06:53.918 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71329
00:06:53.918 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:06:53.918 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:53.918 element at address: 0x200027a658c0 with size: 0.023743 MiB
00:06:53.918 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:53.918 element at address: 0x20000085a380 with size: 0.016113 MiB
00:06:53.918 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71329
00:06:53.918 element at address: 0x200027a6ba00 with size: 0.002441 MiB
00:06:53.918 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:53.918 element at address: 0x2000004ffb80 with size: 0.000305 MiB
00:06:53.918 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71329
00:06:53.918 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:06:53.918 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71329
00:06:53.918 element at address: 0x20000085a180 with size: 0.000305 MiB
00:06:53.918 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71329
00:06:53.918 element at address: 0x200027a6c4c0 with size: 0.000305 MiB
00:06:53.918 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:53.918 15:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:53.918 15:16:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71329
00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 71329 ']'
00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 71329
00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname
00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:53.918 15:16:00 
dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71329 00:06:53.918 killing process with pid 71329 00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71329' 00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 71329 00:06:53.918 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 71329 00:06:54.487 00:06:54.487 real 0m1.920s 00:06:54.487 user 0m1.755s 00:06:54.487 sys 0m0.638s 00:06:54.487 ************************************ 00:06:54.487 END TEST dpdk_mem_utility 00:06:54.487 ************************************ 00:06:54.487 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.487 15:16:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:54.487 15:16:00 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:54.487 15:16:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:54.487 15:16:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.487 15:16:00 -- common/autotest_common.sh@10 -- # set +x 00:06:54.487 ************************************ 00:06:54.487 START TEST event 00:06:54.487 ************************************ 00:06:54.487 15:16:00 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:54.777 * Looking for test storage... 
00:06:54.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.777 15:16:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.777 15:16:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.777 15:16:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.777 15:16:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.777 15:16:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.777 15:16:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.777 15:16:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.777 15:16:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.777 15:16:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.777 15:16:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.777 15:16:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.777 15:16:00 event -- scripts/common.sh@344 -- # case "$op" in 00:06:54.777 15:16:00 event -- scripts/common.sh@345 -- # : 1 00:06:54.777 15:16:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.777 15:16:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.777 15:16:00 event -- scripts/common.sh@365 -- # decimal 1 00:06:54.777 15:16:00 event -- scripts/common.sh@353 -- # local d=1 00:06:54.777 15:16:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.777 15:16:00 event -- scripts/common.sh@355 -- # echo 1 00:06:54.777 15:16:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.777 15:16:00 event -- scripts/common.sh@366 -- # decimal 2 00:06:54.777 15:16:00 event -- scripts/common.sh@353 -- # local d=2 00:06:54.777 15:16:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.777 15:16:00 event -- scripts/common.sh@355 -- # echo 2 00:06:54.777 15:16:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.777 15:16:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.777 15:16:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.777 15:16:00 event -- scripts/common.sh@368 -- # return 0 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.777 --rc genhtml_branch_coverage=1 00:06:54.777 --rc genhtml_function_coverage=1 00:06:54.777 --rc genhtml_legend=1 00:06:54.777 --rc geninfo_all_blocks=1 00:06:54.777 --rc geninfo_unexecuted_blocks=1 00:06:54.777 00:06:54.777 ' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.777 --rc genhtml_branch_coverage=1 00:06:54.777 --rc genhtml_function_coverage=1 00:06:54.777 --rc genhtml_legend=1 00:06:54.777 --rc geninfo_all_blocks=1 00:06:54.777 --rc geninfo_unexecuted_blocks=1 00:06:54.777 00:06:54.777 ' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.777 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:54.777 --rc genhtml_branch_coverage=1 00:06:54.777 --rc genhtml_function_coverage=1 00:06:54.777 --rc genhtml_legend=1 00:06:54.777 --rc geninfo_all_blocks=1 00:06:54.777 --rc geninfo_unexecuted_blocks=1 00:06:54.777 00:06:54.777 ' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.777 --rc genhtml_branch_coverage=1 00:06:54.777 --rc genhtml_function_coverage=1 00:06:54.777 --rc genhtml_legend=1 00:06:54.777 --rc geninfo_all_blocks=1 00:06:54.777 --rc geninfo_unexecuted_blocks=1 00:06:54.777 00:06:54.777 ' 00:06:54.777 15:16:00 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:54.777 15:16:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:54.777 15:16:00 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:54.777 15:16:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.777 15:16:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.777 ************************************ 00:06:54.777 START TEST event_perf 00:06:54.777 ************************************ 00:06:54.777 15:16:00 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:54.777 Running I/O for 1 seconds...[2024-11-10 15:16:01.001149] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
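The `lt 1.15 2` walk traced above (via `scripts/common.sh`'s `cmp_versions`) decides whether the installed lcov predates 2.x before the harness picks its coverage flags. A minimal sketch of that component-wise comparison, assuming purely numeric fields (illustrative only; the real script also routes each field through a `decimal` normalizer that is omitted here):

```shell
# Sketch of the version comparison the xtrace above walks through:
# split both versions on '.', '-' and ':' and compare field by field.
lt() { # succeed (return 0) when version $1 is strictly older than $2
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local len=${#ver1[@]} v d1 d2
  if (( ${#ver2[@]} > len )); then len=${#ver2[@]}; fi
  for (( v = 0; v < len; v++ )); do
    d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
    if (( d1 < d2 )); then return 0; fi
    if (( d1 > d2 )); then return 1; fi
  done
  return 1 # equal versions are not "less than"
}

if lt 1.15 2; then echo "lcov 1.15 predates 2.x"; fi
```

Comparing numerically rather than lexically is the point: it makes `1.9` sort below `1.10`, which a plain string comparison would get wrong.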
00:06:54.777 [2024-11-10 15:16:01.001313] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71415 ] 00:06:55.043 [2024-11-10 15:16:01.136450] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.043 [2024-11-10 15:16:01.173745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.043 [2024-11-10 15:16:01.218888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.043 [2024-11-10 15:16:01.219120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.043 [2024-11-10 15:16:01.219229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.043 Running I/O for 1 seconds...[2024-11-10 15:16:01.219342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.981 00:06:55.981 lcore 0: 105925 00:06:55.981 lcore 1: 105927 00:06:55.981 lcore 2: 105926 00:06:55.981 lcore 3: 105925 00:06:55.981 done. 
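Every TEST block in this log (the `START TEST` / `END TEST` banners and the `real`/`user`/`sys` timing around each suite) is emitted by a `run_test`-style wrapper. A simplified, hypothetical sketch of such a harness follows; it is not SPDK's actual `run_test`, which additionally manages xtrace and records suite timings:

```shell
# Hypothetical, simplified run_test-style harness: wraps an arbitrary
# command in the START/END banners seen throughout this log.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  local start=$SECONDS rc=0
  "$@" || rc=$?            # run the test command, capturing its status
  echo "************************************"
  echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
  echo "************************************"
  return $rc
}

run_test demo_true true
```

Returning the wrapped command's status lets the caller chain suites while still getting a uniform banner format in the log.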
00:06:55.981 00:06:55.981 real 0m1.352s 00:06:55.981 user 0m4.106s 00:06:55.981 sys 0m0.128s 00:06:55.981 15:16:02 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.981 15:16:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.981 ************************************ 00:06:55.981 END TEST event_perf 00:06:55.981 ************************************ 00:06:56.241 15:16:02 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:56.241 15:16:02 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:56.241 15:16:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:56.241 15:16:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:56.241 ************************************ 00:06:56.241 START TEST event_reactor 00:06:56.241 ************************************ 00:06:56.241 15:16:02 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:56.241 [2024-11-10 15:16:02.422798] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:56.241 [2024-11-10 15:16:02.423002] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71454 ] 00:06:56.241 [2024-11-10 15:16:02.555322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
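The dpdk_mem_utility teardown traced earlier in this log (`'[' -z 71329 ']'`, `kill -0`, `uname`, `ps --no-headers -o comm=`, then `kill` and `wait`) follows a killprocess pattern: verify the pid is set and alive, inspect the process name, and only then signal it. A simplified sketch under the assumption of a Linux host (not SPDK's exact helper, which also handles non-Linux `ps` variants):

```shell
# Simplified killprocess-style teardown, assuming Linux `ps` options.
killprocess() {
  local pid=$1
  if [ -z "$pid" ]; then return 1; fi
  if ! kill -0 "$pid" 2> /dev/null; then return 0; fi  # already gone
  local name=""
  if [ "$(uname)" = Linux ]; then
    name=$(ps --no-headers -o comm= "$pid" || true)    # process name by pid
  fi
  if [ "$name" = sudo ]; then return 1; fi             # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2> /dev/null || true                     # reap if it is our child
}
```

The `kill -0` probe sends no signal at all; it only checks that the pid exists and is signalable, which is why it is safe to run before deciding anything.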
00:06:56.241 [2024-11-10 15:16:02.593719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.501 [2024-11-10 15:16:02.631272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.440 test_start 00:06:57.440 oneshot 00:06:57.440 tick 100 00:06:57.440 tick 100 00:06:57.440 tick 250 00:06:57.440 tick 100 00:06:57.440 tick 100 00:06:57.440 tick 100 00:06:57.440 tick 250 00:06:57.440 tick 500 00:06:57.440 tick 100 00:06:57.440 tick 100 00:06:57.440 tick 250 00:06:57.440 tick 100 00:06:57.440 tick 100 00:06:57.440 test_end 00:06:57.440 00:06:57.440 real 0m1.345s 00:06:57.440 user 0m1.128s 00:06:57.440 sys 0m0.109s 00:06:57.440 ************************************ 00:06:57.440 END TEST event_reactor 00:06:57.440 ************************************ 00:06:57.440 15:16:03 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:57.440 15:16:03 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:57.440 15:16:03 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:57.440 15:16:03 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:57.440 15:16:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:57.440 15:16:03 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.440 ************************************ 00:06:57.440 START TEST event_reactor_perf 00:06:57.440 ************************************ 00:06:57.440 15:16:03 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:57.700 [2024-11-10 15:16:03.822059] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:06:57.700 [2024-11-10 15:16:03.822234] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71491 ] 00:06:57.700 [2024-11-10 15:16:03.953556] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:57.700 [2024-11-10 15:16:03.989204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.700 [2024-11-10 15:16:04.029334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.081 test_start 00:06:59.081 test_end 00:06:59.081 Performance: 393029 events per second 00:06:59.081 00:06:59.081 real 0m1.340s 00:06:59.082 user 0m1.137s 00:06:59.082 sys 0m0.096s 00:06:59.082 ************************************ 00:06:59.082 END TEST event_reactor_perf 00:06:59.082 ************************************ 00:06:59.082 15:16:05 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.082 15:16:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:59.082 15:16:05 event -- event/event.sh@49 -- # uname -s 00:06:59.082 15:16:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:59.082 15:16:05 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:59.082 15:16:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.082 15:16:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.082 15:16:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:59.082 ************************************ 00:06:59.082 START TEST event_scheduler 00:06:59.082 ************************************ 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:59.082 * Looking for test storage... 00:06:59.082 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.082 15:16:05 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.082 --rc genhtml_branch_coverage=1 00:06:59.082 --rc genhtml_function_coverage=1 00:06:59.082 --rc genhtml_legend=1 00:06:59.082 --rc geninfo_all_blocks=1 00:06:59.082 --rc geninfo_unexecuted_blocks=1 00:06:59.082 00:06:59.082 ' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.082 --rc genhtml_branch_coverage=1 00:06:59.082 --rc genhtml_function_coverage=1 00:06:59.082 --rc 
genhtml_legend=1 00:06:59.082 --rc geninfo_all_blocks=1 00:06:59.082 --rc geninfo_unexecuted_blocks=1 00:06:59.082 00:06:59.082 ' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.082 --rc genhtml_branch_coverage=1 00:06:59.082 --rc genhtml_function_coverage=1 00:06:59.082 --rc genhtml_legend=1 00:06:59.082 --rc geninfo_all_blocks=1 00:06:59.082 --rc geninfo_unexecuted_blocks=1 00:06:59.082 00:06:59.082 ' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.082 --rc genhtml_branch_coverage=1 00:06:59.082 --rc genhtml_function_coverage=1 00:06:59.082 --rc genhtml_legend=1 00:06:59.082 --rc geninfo_all_blocks=1 00:06:59.082 --rc geninfo_unexecuted_blocks=1 00:06:59.082 00:06:59.082 ' 00:06:59.082 15:16:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:59.082 15:16:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71556 00:06:59.082 15:16:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:59.082 15:16:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.082 15:16:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71556 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 71556 ']' 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:59.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.082 15:16:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:59.342 [2024-11-10 15:16:05.507022] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:06:59.342 [2024-11-10 15:16:05.507252] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71556 ] 00:06:59.342 [2024-11-10 15:16:05.647357] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.342 [2024-11-10 15:16:05.679228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.601 [2024-11-10 15:16:05.708933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.601 [2024-11-10 15:16:05.708962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.601 [2024-11-10 15:16:05.709474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.601 [2024-11-10 15:16:05.709610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.171 15:16:06 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.171 15:16:06 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:07:00.171 15:16:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:00.171 15:16:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.171 15:16:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.171 POWER: 
acpi-cpufreq driver is not supported 00:07:00.171 POWER: intel_pstate driver is not supported 00:07:00.171 POWER: amd-pstate driver is not supported 00:07:00.171 POWER: cppc_cpufreq driver is not supported 00:07:00.171 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:07:00.171 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:07:00.172 POWER: Unable to set Power Management Environment for lcore 0 00:07:00.172 [2024-11-10 15:16:06.343652] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:07:00.172 [2024-11-10 15:16:06.343705] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:07:00.172 [2024-11-10 15:16:06.343717] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:00.172 [2024-11-10 15:16:06.343733] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:00.172 [2024-11-10 15:16:06.343742] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:00.172 [2024-11-10 15:16:06.343750] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 [2024-11-10 15:16:06.419280] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 ************************************ 00:07:00.172 START TEST scheduler_create_thread 00:07:00.172 ************************************ 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 2 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 3 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 4 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 5 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 6 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:07:00.172 7 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.172 8 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.172 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.432 9 00:07:00.432 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.432 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:00.432 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.432 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.690 10 00:07:00.690 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:00.690 15:16:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:07:00.690 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.690 15:16:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:02.071 15:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.071 15:16:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:02.071 15:16:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:02.071 15:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.071 15:16:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.010 15:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.010 15:16:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:03.010 15:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.010 15:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.577 15:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:03.577 15:16:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:03.577 15:16:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:03.577 15:16:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:03.577 15:16:09 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 ************************************ 00:07:04.514 END TEST scheduler_create_thread 00:07:04.514 ************************************ 00:07:04.514 15:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.514 00:07:04.514 real 0m4.220s 00:07:04.514 user 0m0.026s 00:07:04.514 sys 0m0.010s 00:07:04.514 15:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:04.514 15:16:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 15:16:10 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:04.514 15:16:10 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71556 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 71556 ']' 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 71556 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71556 00:07:04.514 killing process with pid 71556 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:04.514 15:16:10 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71556' 00:07:04.515 15:16:10 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 71556 00:07:04.515 15:16:10 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 71556 00:07:04.774 [2024-11-10 15:16:10.932678] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:07:05.033 00:07:05.033 real 0m6.007s 00:07:05.033 user 0m12.920s 00:07:05.033 sys 0m0.520s 00:07:05.033 15:16:11 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:05.033 ************************************ 00:07:05.033 END TEST event_scheduler 00:07:05.033 ************************************ 00:07:05.033 15:16:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:05.033 15:16:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:05.033 15:16:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:05.033 15:16:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:05.033 15:16:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:05.033 15:16:11 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.033 ************************************ 00:07:05.033 START TEST app_repeat 00:07:05.033 ************************************ 00:07:05.033 15:16:11 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71673 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:05.033 
15:16:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71673' 00:07:05.033 Process app_repeat pid: 71673 00:07:05.033 15:16:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:05.034 15:16:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:05.034 spdk_app_start Round 0 00:07:05.034 15:16:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71673 /var/tmp/spdk-nbd.sock 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71673 ']' 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:05.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:05.034 15:16:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:05.034 [2024-11-10 15:16:11.333375] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:05.034 [2024-11-10 15:16:11.333567] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:07:05.293 [2024-11-10 15:16:11.466933] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:05.293 [2024-11-10 15:16:11.505861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.293 [2024-11-10 15:16:11.548558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.293 [2024-11-10 15:16:11.548631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.862 15:16:12 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:05.862 15:16:12 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:05.862 15:16:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.122 Malloc0 00:07:06.122 15:16:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:06.381 Malloc1 00:07:06.381 15:16:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.381 15:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:06.643 /dev/nbd0 00:07:06.643 15:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:06.643 15:16:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.643 1+0 records in 00:07:06.643 1+0 records out 00:07:06.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528423 s, 7.8 MB/s 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 
00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:06.643 15:16:12 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:06.643 15:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.643 15:16:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.644 15:16:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:06.903 /dev/nbd1 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:06.903 1+0 records in 00:07:06.903 1+0 records out 00:07:06.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273021 s, 15.0 MB/s 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:06.903 15:16:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.903 15:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:07.163 { 00:07:07.163 "nbd_device": "/dev/nbd0", 00:07:07.163 "bdev_name": "Malloc0" 00:07:07.163 }, 00:07:07.163 { 00:07:07.163 "nbd_device": "/dev/nbd1", 00:07:07.163 "bdev_name": "Malloc1" 00:07:07.163 } 00:07:07.163 ]' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:07.163 { 00:07:07.163 "nbd_device": "/dev/nbd0", 00:07:07.163 "bdev_name": "Malloc0" 00:07:07.163 }, 00:07:07.163 { 00:07:07.163 "nbd_device": "/dev/nbd1", 00:07:07.163 "bdev_name": "Malloc1" 00:07:07.163 } 00:07:07.163 ]' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:07.163 /dev/nbd1' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:07.163 /dev/nbd1' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c 
/dev/nbd 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.163 15:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:07.164 256+0 records in 00:07:07.164 256+0 records out 00:07:07.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143147 s, 73.3 MB/s 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:07.164 256+0 records in 00:07:07.164 256+0 records out 00:07:07.164 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257448 s, 40.7 MB/s 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:07.164 256+0 records in 00:07:07.164 256+0 records out 00:07:07.164 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.0290381 s, 36.1 MB/s 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:07.164 15:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.164 15:16:13 
event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.423 15:16:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.683 15:16:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.683 
15:16:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.942 15:16:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:07.943 15:16:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:07.943 15:16:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:08.202 15:16:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:08.462 [2024-11-10 15:16:14.717284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.462 [2024-11-10 15:16:14.756220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.462 [2024-11-10 15:16:14.756223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.722 [2024-11-10 15:16:14.831775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:08.722 [2024-11-10 15:16:14.831863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:11.262 spdk_app_start Round 1 00:07:11.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.262 15:16:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.262 15:16:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:11.262 15:16:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71673 /var/tmp/spdk-nbd.sock 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71673 ']' 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.262 15:16:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.522 15:16:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.522 15:16:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:11.522 15:16:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.522 Malloc0 00:07:11.782 15:16:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.782 Malloc1 00:07:11.782 15:16:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.782 
15:16:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.782 15:16:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.042 /dev/nbd0 00:07:12.042 15:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.042 15:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:12.042 15:16:18 
event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.042 1+0 records in 00:07:12.042 1+0 records out 00:07:12.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475263 s, 8.6 MB/s 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:12.042 15:16:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:12.042 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.042 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.042 15:16:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:12.302 /dev/nbd1 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:12.302 15:16:18 
event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:12.302 1+0 records in 00:07:12.302 1+0 records out 00:07:12.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393732 s, 10.4 MB/s 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:12.302 15:16:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.302 15:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.562 { 00:07:12.562 "nbd_device": "/dev/nbd0", 00:07:12.562 "bdev_name": "Malloc0" 00:07:12.562 }, 00:07:12.562 { 00:07:12.562 "nbd_device": "/dev/nbd1", 00:07:12.562 "bdev_name": 
"Malloc1" 00:07:12.562 } 00:07:12.562 ]' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.562 { 00:07:12.562 "nbd_device": "/dev/nbd0", 00:07:12.562 "bdev_name": "Malloc0" 00:07:12.562 }, 00:07:12.562 { 00:07:12.562 "nbd_device": "/dev/nbd1", 00:07:12.562 "bdev_name": "Malloc1" 00:07:12.562 } 00:07:12.562 ]' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:12.562 /dev/nbd1' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:12.562 /dev/nbd1' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:12.562 256+0 records in 00:07:12.562 256+0 records out 00:07:12.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00519211 s, 202 MB/s 
00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.562 15:16:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:12.563 256+0 records in 00:07:12.563 256+0 records out 00:07:12.563 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223687 s, 46.9 MB/s 00:07:12.563 15:16:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:12.563 15:16:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:12.821 256+0 records in 00:07:12.821 256+0 records out 00:07:12.821 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208756 s, 50.2 MB/s 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.821 15:16:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.080 15:16:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.340 15:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:13.599 15:16:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:13.599 15:16:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:13.599 15:16:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:13.858 [2024-11-10 15:16:20.183340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.118 [2024-11-10 15:16:20.221497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.118 [2024-11-10 15:16:20.221505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.118 [2024-11-10 15:16:20.296943] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.118 [2024-11-10 15:16:20.297022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.657 spdk_app_start Round 2 00:07:16.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.657 15:16:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.657 15:16:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:16.657 15:16:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71673 /var/tmp/spdk-nbd.sock 00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71673 ']' 00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.657 15:16:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.917 15:16:23 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:16.917 15:16:23 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:16.917 15:16:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.177 Malloc0 00:07:17.177 15:16:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:17.436 Malloc1 00:07:17.436 15:16:23 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:17.436 15:16:23 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:17.436 /dev/nbd0 00:07:17.436 15:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:17.695 15:16:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:17.695 15:16:23 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:17.695 15:16:23 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:17.695 15:16:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:17.695 15:16:23 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.696 1+0 records in 00:07:17.696 1+0 records out 00:07:17.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000455505 s, 9.0 MB/s 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.696 15:16:23 
event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:17.696 15:16:23 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:17.696 15:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.696 15:16:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.696 15:16:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:17.696 /dev/nbd1 00:07:17.696 15:16:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:17.696 15:16:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:17.696 1+0 records in 00:07:17.696 1+0 records out 00:07:17.696 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290205 s, 14.1 MB/s 00:07:17.696 15:16:24 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.955 15:16:24 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:07:17.955 15:16:24 event.app_repeat -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:17.956 15:16:24 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:17.956 15:16:24 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.956 { 00:07:17.956 "nbd_device": "/dev/nbd0", 00:07:17.956 "bdev_name": "Malloc0" 00:07:17.956 }, 00:07:17.956 { 00:07:17.956 "nbd_device": "/dev/nbd1", 00:07:17.956 "bdev_name": "Malloc1" 00:07:17.956 } 00:07:17.956 ]' 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.956 { 00:07:17.956 "nbd_device": "/dev/nbd0", 00:07:17.956 "bdev_name": "Malloc0" 00:07:17.956 }, 00:07:17.956 { 00:07:17.956 "nbd_device": "/dev/nbd1", 00:07:17.956 "bdev_name": "Malloc1" 00:07:17.956 } 00:07:17.956 ]' 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.956 /dev/nbd1' 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.956 /dev/nbd1' 00:07:17.956 15:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:18.216 
15:16:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:18.216 256+0 records in 00:07:18.216 256+0 records out 00:07:18.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130525 s, 80.3 MB/s 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:18.216 256+0 records in 00:07:18.216 256+0 records out 00:07:18.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207694 s, 50.5 MB/s 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:18.216 256+0 records in 00:07:18.216 256+0 records out 00:07:18.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270924 s, 38.7 MB/s 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.216 15:16:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:18.475 15:16:24 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.475 15:16:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:18.476 15:16:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.761 15:16:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:18.761 15:16:25 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:18.761 15:16:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:18.761 15:16:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:18.761 15:16:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:18.761 15:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:18.761 15:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:19.028 15:16:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:19.028 15:16:25 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:19.028 15:16:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:19.288 [2024-11-10 15:16:25.617119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:19.548 [2024-11-10 15:16:25.655257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.548 [2024-11-10 15:16:25.655262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.548 [2024-11-10 15:16:25.731947] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:19.548 [2024-11-10 15:16:25.732059] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:07:22.087 15:16:28 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71673 /var/tmp/spdk-nbd.sock 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 71673 ']' 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.087 15:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:07:22.347 15:16:28 event.app_repeat -- event/event.sh@39 -- # killprocess 71673 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 71673 ']' 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 71673 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71673 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:22.347 killing process with pid 71673 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71673' 00:07:22.347 15:16:28 
event.app_repeat -- common/autotest_common.sh@971 -- # kill 71673 00:07:22.347 15:16:28 event.app_repeat -- common/autotest_common.sh@976 -- # wait 71673 00:07:22.606 spdk_app_start is called in Round 0. 00:07:22.606 Shutdown signal received, stop current app iteration 00:07:22.606 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 reinitialization... 00:07:22.606 spdk_app_start is called in Round 1. 00:07:22.606 Shutdown signal received, stop current app iteration 00:07:22.606 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 reinitialization... 00:07:22.606 spdk_app_start is called in Round 2. 00:07:22.606 Shutdown signal received, stop current app iteration 00:07:22.606 Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 reinitialization... 00:07:22.606 spdk_app_start is called in Round 3. 00:07:22.606 Shutdown signal received, stop current app iteration 00:07:22.606 15:16:28 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:22.606 15:16:28 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:22.606 00:07:22.606 real 0m17.619s 00:07:22.606 user 0m38.411s 00:07:22.606 sys 0m2.966s 00:07:22.606 15:16:28 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.606 15:16:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:22.606 ************************************ 00:07:22.606 END TEST app_repeat 00:07:22.606 ************************************ 00:07:22.606 15:16:28 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:22.606 15:16:28 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.606 15:16:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.606 15:16:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.606 15:16:28 event -- common/autotest_common.sh@10 -- # set +x 00:07:22.606 ************************************ 00:07:22.606 START TEST 
cpu_locks 00:07:22.606 ************************************ 00:07:22.606 15:16:28 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:22.867 * Looking for test storage... 00:07:22.867 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.867 15:16:29 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.867 --rc genhtml_branch_coverage=1 00:07:22.867 --rc genhtml_function_coverage=1 00:07:22.867 --rc genhtml_legend=1 00:07:22.867 --rc geninfo_all_blocks=1 00:07:22.867 --rc geninfo_unexecuted_blocks=1 00:07:22.867 00:07:22.867 ' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.867 --rc genhtml_branch_coverage=1 00:07:22.867 --rc genhtml_function_coverage=1 00:07:22.867 --rc genhtml_legend=1 00:07:22.867 --rc geninfo_all_blocks=1 00:07:22.867 --rc geninfo_unexecuted_blocks=1 
00:07:22.867 00:07:22.867 ' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.867 --rc genhtml_branch_coverage=1 00:07:22.867 --rc genhtml_function_coverage=1 00:07:22.867 --rc genhtml_legend=1 00:07:22.867 --rc geninfo_all_blocks=1 00:07:22.867 --rc geninfo_unexecuted_blocks=1 00:07:22.867 00:07:22.867 ' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.867 --rc genhtml_branch_coverage=1 00:07:22.867 --rc genhtml_function_coverage=1 00:07:22.867 --rc genhtml_legend=1 00:07:22.867 --rc geninfo_all_blocks=1 00:07:22.867 --rc geninfo_unexecuted_blocks=1 00:07:22.867 00:07:22.867 ' 00:07:22.867 15:16:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:22.867 15:16:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:22.867 15:16:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:22.867 15:16:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.867 15:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.867 ************************************ 00:07:22.867 START TEST default_locks 00:07:22.867 ************************************ 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=72108 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:22.867 
15:16:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 72108 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 72108 ']' 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:22.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:22.867 15:16:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:23.127 [2024-11-10 15:16:29.295274] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:23.127 [2024-11-10 15:16:29.295414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72108 ] 00:07:23.127 [2024-11-10 15:16:29.428197] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:23.127 [2024-11-10 15:16:29.467602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.387 [2024-11-10 15:16:29.511113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 72108 ']' 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.956 killing process with pid 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72108' 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 72108 00:07:23.956 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 72108 00:07:24.895 15:16:30 
event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 72108 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72108 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 72108 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 72108 ']' 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72108) - No such process 00:07:24.895 ERROR: process (pid: 72108) is no longer running 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:24.895 00:07:24.895 real 0m1.720s 00:07:24.895 user 0m1.571s 00:07:24.895 sys 0m0.605s 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.895 15:16:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 ************************************ 00:07:24.895 END TEST default_locks 00:07:24.895 ************************************ 00:07:24.895 15:16:30 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:24.895 15:16:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # 
'[' 2 -le 1 ']' 00:07:24.895 15:16:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.895 15:16:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 ************************************ 00:07:24.895 START TEST default_locks_via_rpc 00:07:24.895 ************************************ 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=72156 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 72156 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72156 ']' 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.895 15:16:30 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:24.895 [2024-11-10 15:16:31.091552] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:07:24.895 [2024-11-10 15:16:31.091689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72156 ] 00:07:24.895 [2024-11-10 15:16:31.230121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:25.156 [2024-11-10 15:16:31.267527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.156 [2024-11-10 15:16:31.308509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.725 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.725 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 72156 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 72156 00:07:25.726 15:16:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.987 15:16:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 72156 00:07:25.987 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 72156 ']' 00:07:25.988 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 72156 00:07:25.988 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:07:25.988 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72156 00:07:26.247 killing process with pid 72156 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72156' 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 72156 00:07:26.247 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 72156 00:07:26.817 ************************************ 00:07:26.817 END TEST 
default_locks_via_rpc 00:07:26.817 ************************************ 00:07:26.817 00:07:26.817 real 0m2.003s 00:07:26.817 user 0m1.869s 00:07:26.817 sys 0m0.736s 00:07:26.817 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.817 15:16:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 15:16:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:26.817 15:16:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:26.817 15:16:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.817 15:16:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:26.817 ************************************ 00:07:26.817 START TEST non_locking_app_on_locked_coremask 00:07:26.817 ************************************ 00:07:26.817 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:07:26.817 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=72207 00:07:26.817 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:26.817 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 72207 /var/tmp/spdk.sock 00:07:26.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72207 ']' 00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.818 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:26.818 [2024-11-10 15:16:33.164924] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:26.818 [2024-11-10 15:16:33.165224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72207 ] 00:07:27.078 [2024-11-10 15:16:33.304957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:27.078 [2024-11-10 15:16:33.340512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.078 [2024-11-10 15:16:33.382104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=72219 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 72219 /var/tmp/spdk2.sock 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72219 ']' 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:27.647 15:16:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.907 [2024-11-10 15:16:34.031676] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:27.907 [2024-11-10 15:16:34.031912] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72219 ] 00:07:27.907 [2024-11-10 15:16:34.164384] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:27.907 [2024-11-10 15:16:34.201023] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:27.907 [2024-11-10 15:16:34.205161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.167 [2024-11-10 15:16:34.292447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.736 15:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:28.736 15:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:28.736 15:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 72207 00:07:28.736 15:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72207 00:07:28.736 15:16:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.996 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 72207 00:07:28.996 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@952 -- # '[' -z 72207 ']' 00:07:28.996 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72207 00:07:28.996 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72207 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:29.255 killing process with pid 72207 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72207' 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72207 00:07:29.255 15:16:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72207 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 72219 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72219 ']' 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72219 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72219 00:07:30.651 killing process with pid 72219 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72219' 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72219 00:07:30.651 15:16:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72219 00:07:31.221 ************************************ 00:07:31.221 END TEST non_locking_app_on_locked_coremask 00:07:31.221 ************************************ 00:07:31.221 00:07:31.221 real 0m4.221s 00:07:31.221 user 0m4.091s 00:07:31.221 sys 0m1.318s 00:07:31.221 15:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:31.221 15:16:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 15:16:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:31.221 15:16:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:31.221 15:16:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:31.221 15:16:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 ************************************ 00:07:31.221 START TEST locking_app_on_unlocked_coremask 00:07:31.221 ************************************ 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:07:31.221 15:16:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72293 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72293 /var/tmp/spdk.sock 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72293 ']' 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.221 15:16:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:31.221 [2024-11-10 15:16:37.452732] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:31.221 [2024-11-10 15:16:37.452954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72293 ] 00:07:31.479 [2024-11-10 15:16:37.592484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:31.479 [2024-11-10 15:16:37.631089] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:31.479 [2024-11-10 15:16:37.631173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.479 [2024-11-10 15:16:37.671749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72304 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72304 /var/tmp/spdk2.sock 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72304 ']' 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.048 15:16:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.048 [2024-11-10 15:16:38.396546] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:32.048 [2024-11-10 15:16:38.396764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72304 ] 00:07:32.307 [2024-11-10 15:16:38.532775] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:32.307 [2024-11-10 15:16:38.570100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.307 [2024-11-10 15:16:38.658350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.263 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.263 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:33.263 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72304 00:07:33.263 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72304 00:07:33.263 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.520 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72293 00:07:33.520 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72293 ']' 00:07:33.520 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # kill -0 72293 00:07:33.520 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:33.521 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72293 00:07:33.779 killing process with pid 72293 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72293' 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 72293 00:07:33.779 15:16:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 72293 00:07:35.158 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72304 00:07:35.158 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72304 ']' 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 72304 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72304 00:07:35.159 killing process with pid 72304 00:07:35.159 15:16:41 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72304' 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 72304 00:07:35.159 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 72304 00:07:35.728 00:07:35.728 real 0m4.447s 00:07:35.728 user 0m4.372s 00:07:35.728 sys 0m1.412s 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:35.728 ************************************ 00:07:35.728 END TEST locking_app_on_unlocked_coremask 00:07:35.728 ************************************ 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.728 15:16:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:35.728 15:16:41 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:35.728 15:16:41 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:35.728 15:16:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.728 ************************************ 00:07:35.728 START TEST locking_app_on_locked_coremask 00:07:35.728 ************************************ 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72386 00:07:35.728 15:16:41 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72386 /var/tmp/spdk.sock 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72386 ']' 00:07:35.728 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.729 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:35.729 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.729 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:35.729 15:16:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.729 [2024-11-10 15:16:41.975636] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:35.729 [2024-11-10 15:16:41.975908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72386 ] 00:07:35.989 [2024-11-10 15:16:42.116048] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:35.989 [2024-11-10 15:16:42.152897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.989 [2024-11-10 15:16:42.196911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72402 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72402 /var/tmp/spdk2.sock 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72402 /var/tmp/spdk2.sock 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 72402 /var/tmp/spdk2.sock 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 72402 ']' 00:07:36.558 15:16:42 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:36.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:36.558 15:16:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.558 [2024-11-10 15:16:42.874914] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:36.558 [2024-11-10 15:16:42.875583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72402 ] 00:07:36.818 [2024-11-10 15:16:43.009596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:36.818 [2024-11-10 15:16:43.042523] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72386 has claimed it. 00:07:36.818 [2024-11-10 15:16:43.042577] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:37.387 ERROR: process (pid: 72402) is no longer running 00:07:37.387 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72402) - No such process 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72386 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72386 00:07:37.387 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72386 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 72386 ']' 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 72386 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72386 00:07:37.649 
15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72386' 00:07:37.649 killing process with pid 72386 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 72386 00:07:37.649 15:16:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 72386 00:07:38.221 00:07:38.221 real 0m2.620s 00:07:38.221 user 0m2.611s 00:07:38.221 sys 0m0.893s 00:07:38.221 15:16:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.221 15:16:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.221 ************************************ 00:07:38.221 END TEST locking_app_on_locked_coremask 00:07:38.221 ************************************ 00:07:38.221 15:16:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:38.221 15:16:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:38.221 15:16:44 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.221 15:16:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.221 ************************************ 00:07:38.221 START TEST locking_overlapped_coremask 00:07:38.221 ************************************ 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72444 00:07:38.221 15:16:44 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72444 /var/tmp/spdk.sock 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72444 ']' 00:07:38.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:38.221 15:16:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.481 [2024-11-10 15:16:44.669835] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:38.481 [2024-11-10 15:16:44.670045] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72444 ] 00:07:38.481 [2024-11-10 15:16:44.808961] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:38.740 [2024-11-10 15:16:44.846082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.741 [2024-11-10 15:16:44.894123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.741 [2024-11-10 15:16:44.894185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.741 [2024-11-10 15:16:44.894319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72462 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72462 /var/tmp/spdk2.sock 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 72462 /var/tmp/spdk2.sock 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 
72462 /var/tmp/spdk2.sock 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 72462 ']' 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:39.310 15:16:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.310 [2024-11-10 15:16:45.552004] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:39.310 [2024-11-10 15:16:45.552220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72462 ] 00:07:39.570 [2024-11-10 15:16:45.687113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:39.570 [2024-11-10 15:16:45.719880] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72444 has claimed it. 00:07:39.570 [2024-11-10 15:16:45.719946] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:40.140 ERROR: process (pid: 72462) is no longer running 00:07:40.140 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (72462) - No such process 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72444 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 72444 ']' 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 72444 00:07:40.140 15:16:46 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72444 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72444' 00:07:40.140 killing process with pid 72444 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 72444 00:07:40.140 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 72444 00:07:40.710 00:07:40.710 real 0m2.314s 00:07:40.710 user 0m5.948s 00:07:40.710 sys 0m0.703s 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.710 ************************************ 00:07:40.710 END TEST locking_overlapped_coremask 00:07:40.710 ************************************ 00:07:40.710 15:16:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:40.710 15:16:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:40.710 15:16:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:40.710 15:16:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.710 ************************************ 00:07:40.710 START TEST 
locking_overlapped_coremask_via_rpc 00:07:40.710 ************************************ 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72515 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72515 /var/tmp/spdk.sock 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72515 ']' 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:40.710 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.711 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:40.711 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.711 15:16:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:40.711 [2024-11-10 15:16:47.039114] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:07:40.711 [2024-11-10 15:16:47.039363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72515 ] 00:07:40.970 [2024-11-10 15:16:47.180441] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:40.970 [2024-11-10 15:16:47.217943] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:40.970 [2024-11-10 15:16:47.218126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:40.970 [2024-11-10 15:16:47.262268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.970 [2024-11-10 15:16:47.262377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.970 [2024-11-10 15:16:47.262483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72532 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72532 /var/tmp/spdk2.sock 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72532 ']' 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:41.540 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:41.541 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:41.541 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:41.541 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:41.541 15:16:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:41.800 [2024-11-10 15:16:47.954423] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:07:41.800 [2024-11-10 15:16:47.954545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72532 ] 00:07:41.800 [2024-11-10 15:16:48.087020] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.800 [2024-11-10 15:16:48.121295] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:41.800 [2024-11-10 15:16:48.121362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.059 [2024-11-10 15:16:48.183433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.059 [2024-11-10 15:16:48.189212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.059 [2024-11-10 15:16:48.189317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.629 15:16:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.629 [2024-11-10 15:16:48.849250] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72515 has claimed it. 
00:07:42.629 request: 00:07:42.629 { 00:07:42.629 "method": "framework_enable_cpumask_locks", 00:07:42.629 "req_id": 1 00:07:42.629 } 00:07:42.629 Got JSON-RPC error response 00:07:42.629 response: 00:07:42.629 { 00:07:42.629 "code": -32603, 00:07:42.629 "message": "Failed to claim CPU core: 2" 00:07:42.629 } 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72515 /var/tmp/spdk.sock 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72515 ']' 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.629 15:16:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72532 /var/tmp/spdk2.sock 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 72532 ']' 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:42.889 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:43.149 00:07:43.149 real 0m2.347s 00:07:43.149 user 0m1.101s 00:07:43.149 sys 0m0.180s 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.149 15:16:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:43.149 ************************************ 00:07:43.149 END TEST locking_overlapped_coremask_via_rpc 00:07:43.149 ************************************ 00:07:43.149 15:16:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:43.149 15:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72515 ]] 00:07:43.149 15:16:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 72515 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72515 ']' 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72515 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72515 00:07:43.149 killing process with pid 72515 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72515' 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72515 00:07:43.149 15:16:49 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72515 00:07:43.717 15:16:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72532 ]] 00:07:43.717 15:16:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72532 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72532 ']' 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72532 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72532 00:07:43.717 killing process with pid 72532 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing 
process with pid 72532' 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 72532 00:07:43.717 15:16:50 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 72532 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:44.286 Process with pid 72515 is not found 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72515 ]] 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72515 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72515 ']' 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72515 00:07:44.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72515) - No such process 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72515 is not found' 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72532 ]] 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72532 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 72532 ']' 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 72532 00:07:44.286 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72532) - No such process 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 72532 is not found' 00:07:44.286 Process with pid 72532 is not found 00:07:44.286 15:16:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:44.286 00:07:44.286 real 0m21.463s 00:07:44.286 user 0m34.097s 00:07:44.286 sys 0m7.131s 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.286 15:16:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.286 
************************************ 00:07:44.286 END TEST cpu_locks 00:07:44.286 ************************************ 00:07:44.286 00:07:44.286 real 0m49.721s 00:07:44.287 user 1m32.016s 00:07:44.287 sys 0m11.343s 00:07:44.287 15:16:50 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.287 15:16:50 event -- common/autotest_common.sh@10 -- # set +x 00:07:44.287 ************************************ 00:07:44.287 END TEST event 00:07:44.287 ************************************ 00:07:44.287 15:16:50 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:44.287 15:16:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:44.287 15:16:50 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.287 15:16:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.287 ************************************ 00:07:44.287 START TEST thread 00:07:44.287 ************************************ 00:07:44.287 15:16:50 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:44.546 * Looking for test storage... 
00:07:44.546 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:44.546 15:16:50 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.546 15:16:50 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.546 15:16:50 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.546 15:16:50 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.546 15:16:50 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.546 15:16:50 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.546 15:16:50 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.546 15:16:50 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.546 15:16:50 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.546 15:16:50 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.546 15:16:50 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.546 15:16:50 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:44.546 15:16:50 thread -- scripts/common.sh@345 -- # : 1 00:07:44.546 15:16:50 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.546 15:16:50 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.546 15:16:50 thread -- scripts/common.sh@365 -- # decimal 1 00:07:44.546 15:16:50 thread -- scripts/common.sh@353 -- # local d=1 00:07:44.546 15:16:50 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.546 15:16:50 thread -- scripts/common.sh@355 -- # echo 1 00:07:44.546 15:16:50 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.546 15:16:50 thread -- scripts/common.sh@366 -- # decimal 2 00:07:44.546 15:16:50 thread -- scripts/common.sh@353 -- # local d=2 00:07:44.546 15:16:50 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.546 15:16:50 thread -- scripts/common.sh@355 -- # echo 2 00:07:44.546 15:16:50 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.546 15:16:50 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.546 15:16:50 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.546 15:16:50 thread -- scripts/common.sh@368 -- # return 0 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:44.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.546 --rc genhtml_branch_coverage=1 00:07:44.546 --rc genhtml_function_coverage=1 00:07:44.546 --rc genhtml_legend=1 00:07:44.546 --rc geninfo_all_blocks=1 00:07:44.546 --rc geninfo_unexecuted_blocks=1 00:07:44.546 00:07:44.546 ' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:44.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.546 --rc genhtml_branch_coverage=1 00:07:44.546 --rc genhtml_function_coverage=1 00:07:44.546 --rc genhtml_legend=1 00:07:44.546 --rc geninfo_all_blocks=1 00:07:44.546 --rc geninfo_unexecuted_blocks=1 00:07:44.546 00:07:44.546 ' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:44.546 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.546 --rc genhtml_branch_coverage=1 00:07:44.546 --rc genhtml_function_coverage=1 00:07:44.546 --rc genhtml_legend=1 00:07:44.546 --rc geninfo_all_blocks=1 00:07:44.546 --rc geninfo_unexecuted_blocks=1 00:07:44.546 00:07:44.546 ' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:44.546 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.546 --rc genhtml_branch_coverage=1 00:07:44.546 --rc genhtml_function_coverage=1 00:07:44.546 --rc genhtml_legend=1 00:07:44.546 --rc geninfo_all_blocks=1 00:07:44.546 --rc geninfo_unexecuted_blocks=1 00:07:44.546 00:07:44.546 ' 00:07:44.546 15:16:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.546 15:16:50 thread -- common/autotest_common.sh@10 -- # set +x 00:07:44.546 ************************************ 00:07:44.546 START TEST thread_poller_perf 00:07:44.546 ************************************ 00:07:44.546 15:16:50 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:44.546 [2024-11-10 15:16:50.842786] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:44.546 [2024-11-10 15:16:50.843026] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72660 ] 00:07:44.806 [2024-11-10 15:16:50.980826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:44.806 [2024-11-10 15:16:51.018598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.806 [2024-11-10 15:16:51.061219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.806 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:46.200 [2024-11-10T15:16:52.563Z] ====================================== 00:07:46.200 [2024-11-10T15:16:52.563Z] busy:2302568798 (cyc) 00:07:46.200 [2024-11-10T15:16:52.563Z] total_run_count: 406000 00:07:46.200 [2024-11-10T15:16:52.563Z] tsc_hz: 2294600000 (cyc) 00:07:46.200 [2024-11-10T15:16:52.563Z] ====================================== 00:07:46.200 [2024-11-10T15:16:52.563Z] poller_cost: 5671 (cyc), 2471 (nsec) 00:07:46.200 00:07:46.200 real 0m1.370s 00:07:46.200 user 0m1.150s 00:07:46.200 ************************************ 00:07:46.200 END TEST thread_poller_perf 00:07:46.200 ************************************ 00:07:46.200 sys 0m0.113s 00:07:46.200 15:16:52 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.200 15:16:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 15:16:52 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:46.200 15:16:52 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:46.200 15:16:52 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.200 15:16:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:46.200 ************************************ 00:07:46.200 START TEST thread_poller_perf 00:07:46.200 ************************************ 00:07:46.200 15:16:52 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:46.200 [2024-11-10 15:16:52.269794] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 
initialization... 00:07:46.200 [2024-11-10 15:16:52.270315] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72702 ] 00:07:46.200 [2024-11-10 15:16:52.402617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.200 [2024-11-10 15:16:52.443252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.200 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:46.200 [2024-11-10 15:16:52.486502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.583 [2024-11-10T15:16:53.946Z] ====================================== 00:07:47.583 [2024-11-10T15:16:53.946Z] busy:2298178016 (cyc) 00:07:47.583 [2024-11-10T15:16:53.946Z] total_run_count: 5294000 00:07:47.583 [2024-11-10T15:16:53.946Z] tsc_hz: 2294600000 (cyc) 00:07:47.583 [2024-11-10T15:16:53.946Z] ====================================== 00:07:47.583 [2024-11-10T15:16:53.946Z] poller_cost: 434 (cyc), 189 (nsec) 00:07:47.583 00:07:47.583 real 0m1.353s 00:07:47.583 user 0m1.139s 00:07:47.583 sys 0m0.107s 00:07:47.583 ************************************ 00:07:47.583 END TEST thread_poller_perf 00:07:47.583 ************************************ 00:07:47.583 15:16:53 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.583 15:16:53 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.583 15:16:53 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:47.583 ************************************ 00:07:47.583 END TEST thread 00:07:47.583 ************************************ 00:07:47.583 00:07:47.583 real 0m3.083s 00:07:47.583 user 0m2.462s 00:07:47.583 sys 0m0.421s 00:07:47.583 15:16:53 thread -- 
common/autotest_common.sh@1128 -- # xtrace_disable
00:07:47.583 15:16:53 thread -- common/autotest_common.sh@10 -- # set +x
00:07:47.583 15:16:53 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:07:47.583 15:16:53 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:07:47.583 15:16:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:47.583 15:16:53 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:47.583 15:16:53 -- common/autotest_common.sh@10 -- # set +x
00:07:47.583 ************************************
00:07:47.583 START TEST app_cmdline
00:07:47.583 ************************************
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:07:47.583 * Looking for test storage...
00:07:47.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@345 -- # : 1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:47.583 15:16:53 app_cmdline -- scripts/common.sh@368 -- # return 0
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:47.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.583 --rc genhtml_branch_coverage=1
00:07:47.583 --rc genhtml_function_coverage=1
00:07:47.583 --rc genhtml_legend=1
00:07:47.583 --rc geninfo_all_blocks=1
00:07:47.583 --rc geninfo_unexecuted_blocks=1
00:07:47.583 
00:07:47.583 '
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:47.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.583 --rc genhtml_branch_coverage=1
00:07:47.583 --rc genhtml_function_coverage=1
00:07:47.583 --rc genhtml_legend=1
00:07:47.583 --rc geninfo_all_blocks=1
00:07:47.583 --rc geninfo_unexecuted_blocks=1
00:07:47.583 
00:07:47.583 '
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:47.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.583 --rc genhtml_branch_coverage=1
00:07:47.583 --rc genhtml_function_coverage=1
00:07:47.583 --rc genhtml_legend=1
00:07:47.583 --rc geninfo_all_blocks=1
00:07:47.583 --rc geninfo_unexecuted_blocks=1
00:07:47.583 
00:07:47.583 '
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:47.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:47.583 --rc genhtml_branch_coverage=1
00:07:47.583 --rc genhtml_function_coverage=1
00:07:47.583 --rc genhtml_legend=1
00:07:47.583 --rc geninfo_all_blocks=1
00:07:47.583 --rc geninfo_unexecuted_blocks=1
00:07:47.583 
00:07:47.583 '
00:07:47.583 15:16:53 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:07:47.583 15:16:53 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72783
00:07:47.583 15:16:53 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:07:47.583 15:16:53 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72783
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 72783 ']'
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:47.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:47.583 15:16:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:47.843 [2024-11-10 15:16:54.026690] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization...
00:07:47.843 [2024-11-10 15:16:54.026848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72783 ]
00:07:47.843 [2024-11-10 15:16:54.166434] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:48.103 [2024-11-10 15:16:54.204317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.103 [2024-11-10 15:16:54.244925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.672 15:16:54 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:48.672 15:16:54 app_cmdline -- common/autotest_common.sh@866 -- # return 0
00:07:48.672 15:16:54 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:07:48.672 {
00:07:48.672 "version": "SPDK v25.01-pre git sha1 06bc8ce53",
00:07:48.672 "fields": {
00:07:48.672 "major": 25,
00:07:48.672 "minor": 1,
00:07:48.672 "patch": 0,
00:07:48.672 "suffix": "-pre",
00:07:48.672 "commit": "06bc8ce53"
00:07:48.672 }
00:07:48.672 }
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:07:48.672 15:16:55 app_cmdline -- app/cmdline.sh@26 -- # sort
00:07:48.673 15:16:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:48.673 15:16:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:48.933 15:16:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:07:48.933 15:16:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:07:48.933 15:16:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:07:48.933 request:
00:07:48.933 {
00:07:48.933 "method": "env_dpdk_get_mem_stats",
00:07:48.933 "req_id": 1
00:07:48.933 }
00:07:48.933 Got JSON-RPC error response
00:07:48.933 response:
00:07:48.933 {
00:07:48.933 "code": -32601,
00:07:48.933 "message": "Method not found"
00:07:48.933 }
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:07:48.933 15:16:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72783
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 72783 ']'
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 72783
00:07:48.933 15:16:55 app_cmdline -- common/autotest_common.sh@957 -- # uname
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72783
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72783'
00:07:49.193 killing process with pid 72783
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@971 -- # kill 72783
00:07:49.193 15:16:55 app_cmdline -- common/autotest_common.sh@976 -- # wait 72783
00:07:49.761 
00:07:49.761 real 0m2.246s
00:07:49.761 user 0m2.334s
00:07:49.761 sys 0m0.700s
00:07:49.761 15:16:55 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:49.761 15:16:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:07:49.761 ************************************
00:07:49.761 END TEST app_cmdline
00:07:49.761 ************************************
00:07:49.761 15:16:56 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:07:49.761 15:16:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:49.761 15:16:56 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:49.761 15:16:56 -- common/autotest_common.sh@10 -- # set +x
00:07:49.761 ************************************
00:07:49.761 START TEST version
00:07:49.761 ************************************
00:07:49.761 15:16:56 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:07:50.021 * Looking for test storage...
00:07:50.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1691 -- # lcov --version
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:50.021 15:16:56 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:50.021 15:16:56 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:50.021 15:16:56 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:50.021 15:16:56 version -- scripts/common.sh@336 -- # IFS=.-:
00:07:50.021 15:16:56 version -- scripts/common.sh@336 -- # read -ra ver1
00:07:50.021 15:16:56 version -- scripts/common.sh@337 -- # IFS=.-:
00:07:50.021 15:16:56 version -- scripts/common.sh@337 -- # read -ra ver2
00:07:50.021 15:16:56 version -- scripts/common.sh@338 -- # local 'op=<'
00:07:50.021 15:16:56 version -- scripts/common.sh@340 -- # ver1_l=2
00:07:50.021 15:16:56 version -- scripts/common.sh@341 -- # ver2_l=1
00:07:50.021 15:16:56 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:50.021 15:16:56 version -- scripts/common.sh@344 -- # case "$op" in
00:07:50.021 15:16:56 version -- scripts/common.sh@345 -- # : 1
00:07:50.021 15:16:56 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:50.021 15:16:56 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:50.021 15:16:56 version -- scripts/common.sh@365 -- # decimal 1
00:07:50.021 15:16:56 version -- scripts/common.sh@353 -- # local d=1
00:07:50.021 15:16:56 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:50.021 15:16:56 version -- scripts/common.sh@355 -- # echo 1
00:07:50.021 15:16:56 version -- scripts/common.sh@365 -- # ver1[v]=1
00:07:50.021 15:16:56 version -- scripts/common.sh@366 -- # decimal 2
00:07:50.021 15:16:56 version -- scripts/common.sh@353 -- # local d=2
00:07:50.021 15:16:56 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:50.021 15:16:56 version -- scripts/common.sh@355 -- # echo 2
00:07:50.021 15:16:56 version -- scripts/common.sh@366 -- # ver2[v]=2
00:07:50.021 15:16:56 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:50.021 15:16:56 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:50.021 15:16:56 version -- scripts/common.sh@368 -- # return 0
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:50.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.021 --rc genhtml_branch_coverage=1
00:07:50.021 --rc genhtml_function_coverage=1
00:07:50.021 --rc genhtml_legend=1
00:07:50.021 --rc geninfo_all_blocks=1
00:07:50.021 --rc geninfo_unexecuted_blocks=1
00:07:50.021 
00:07:50.021 '
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:50.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.021 --rc genhtml_branch_coverage=1
00:07:50.021 --rc genhtml_function_coverage=1
00:07:50.021 --rc genhtml_legend=1
00:07:50.021 --rc geninfo_all_blocks=1
00:07:50.021 --rc geninfo_unexecuted_blocks=1
00:07:50.021 
00:07:50.021 '
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:50.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.021 --rc genhtml_branch_coverage=1
00:07:50.021 --rc genhtml_function_coverage=1
00:07:50.021 --rc genhtml_legend=1
00:07:50.021 --rc geninfo_all_blocks=1
00:07:50.021 --rc geninfo_unexecuted_blocks=1
00:07:50.021 
00:07:50.021 '
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:50.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.021 --rc genhtml_branch_coverage=1
00:07:50.021 --rc genhtml_function_coverage=1
00:07:50.021 --rc genhtml_legend=1
00:07:50.021 --rc geninfo_all_blocks=1
00:07:50.021 --rc geninfo_unexecuted_blocks=1
00:07:50.021 
00:07:50.021 '
00:07:50.021 15:16:56 version -- app/version.sh@17 -- # get_header_version major
00:07:50.021 15:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # cut -f2
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # tr -d '"'
00:07:50.021 15:16:56 version -- app/version.sh@17 -- # major=25
00:07:50.021 15:16:56 version -- app/version.sh@18 -- # get_header_version minor
00:07:50.021 15:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # cut -f2
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # tr -d '"'
00:07:50.021 15:16:56 version -- app/version.sh@18 -- # minor=1
00:07:50.021 15:16:56 version -- app/version.sh@19 -- # get_header_version patch
00:07:50.021 15:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # cut -f2
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # tr -d '"'
00:07:50.021 15:16:56 version -- app/version.sh@19 -- # patch=0
00:07:50.021 15:16:56 version -- app/version.sh@20 -- # get_header_version suffix
00:07:50.021 15:16:56 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # cut -f2
00:07:50.021 15:16:56 version -- app/version.sh@14 -- # tr -d '"'
00:07:50.021 15:16:56 version -- app/version.sh@20 -- # suffix=-pre
00:07:50.021 15:16:56 version -- app/version.sh@22 -- # version=25.1
00:07:50.021 15:16:56 version -- app/version.sh@25 -- # (( patch != 0 ))
00:07:50.021 15:16:56 version -- app/version.sh@28 -- # version=25.1rc0
00:07:50.021 15:16:56 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:07:50.021 15:16:56 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:07:50.021 15:16:56 version -- app/version.sh@30 -- # py_version=25.1rc0
00:07:50.021 15:16:56 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:07:50.021 ************************************
00:07:50.021 END TEST version
00:07:50.021 ************************************
00:07:50.021 
00:07:50.021 real 0m0.301s
00:07:50.021 user 0m0.185s
00:07:50.021 sys 0m0.172s
00:07:50.021 15:16:56 version -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:50.021 15:16:56 version -- common/autotest_common.sh@10 -- # set +x
00:07:50.021 15:16:56 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:07:50.021 15:16:56 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]]
00:07:50.021 15:16:56 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:07:50.021 15:16:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:50.021 15:16:56 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:50.021 15:16:56 -- common/autotest_common.sh@10 -- # set +x
00:07:50.021 ************************************
00:07:50.021 START TEST bdev_raid
00:07:50.021 ************************************
00:07:50.021 15:16:56 bdev_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh
00:07:50.281 * Looking for test storage...
00:07:50.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1691 -- # lcov --version
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@336 -- # IFS=.-:
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@337 -- # IFS=.-:
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@338 -- # local 'op=<'
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@344 -- # case "$op" in
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@345 -- # : 1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@365 -- # decimal 1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@353 -- # local d=1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@355 -- # echo 1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@366 -- # decimal 2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@353 -- # local d=2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@355 -- # echo 2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:50.281 15:16:56 bdev_raid -- scripts/common.sh@368 -- # return 0
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:07:50.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.281 --rc genhtml_branch_coverage=1
00:07:50.281 --rc genhtml_function_coverage=1
00:07:50.281 --rc genhtml_legend=1
00:07:50.281 --rc geninfo_all_blocks=1
00:07:50.281 --rc geninfo_unexecuted_blocks=1
00:07:50.281 
00:07:50.281 '
00:07:50.281 15:16:56 bdev_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:07:50.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.281 --rc genhtml_branch_coverage=1
00:07:50.281 --rc genhtml_function_coverage=1
00:07:50.282 --rc genhtml_legend=1
00:07:50.282 --rc geninfo_all_blocks=1
00:07:50.282 --rc geninfo_unexecuted_blocks=1
00:07:50.282 
00:07:50.282 '
00:07:50.282 15:16:56 bdev_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:07:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.282 --rc genhtml_branch_coverage=1
00:07:50.282 --rc genhtml_function_coverage=1
00:07:50.282 --rc genhtml_legend=1
00:07:50.282 --rc geninfo_all_blocks=1
00:07:50.282 --rc geninfo_unexecuted_blocks=1
00:07:50.282 
00:07:50.282 '
00:07:50.282 15:16:56 bdev_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:07:50.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:50.282 --rc genhtml_branch_coverage=1
00:07:50.282 --rc genhtml_function_coverage=1
00:07:50.282 --rc genhtml_legend=1
00:07:50.282 --rc geninfo_all_blocks=1
00:07:50.282 --rc geninfo_unexecuted_blocks=1
00:07:50.282 
00:07:50.282 '
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:50.282 15:16:56 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512
00:07:50.282 15:16:56 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test
00:07:50.282 15:16:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:07:50.282 15:16:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:50.282 15:16:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:50.282 ************************************
00:07:50.282 START TEST raid1_resize_data_offset_test
00:07:50.282 ************************************
00:07:50.282 Process raid pid: 72951
00:07:50.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1127 -- # raid_resize_data_offset_test
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=72951
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 72951'
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 72951
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@833 -- # '[' -z 72951 ']'
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:07:50.282 15:16:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.541 [2024-11-10 15:16:56.714328] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization...
00:07:50.541 [2024-11-10 15:16:56.714554] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:50.541 [2024-11-10 15:16:56.849075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:50.541 [2024-11-10 15:16:56.884175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:50.801 [2024-11-10 15:16:56.923748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.801 [2024-11-10 15:16:57.000956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.801 [2024-11-10 15:16:57.001106] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@866 -- # return 0
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.370 malloc0
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.370 malloc1
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.370 null0
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.370 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.370 [2024-11-10 15:16:57.638036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed
00:07:51.370 [2024-11-10 15:16:57.640064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:51.370 [2024-11-10 15:16:57.640147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed
00:07:51.370 [2024-11-10 15:16:57.640309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:07:51.370 [2024-11-10 15:16:57.640359] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512
00:07:51.370 [2024-11-10 15:16:57.640656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:07:51.370 [2024-11-10 15:16:57.640834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:07:51.371 [2024-11-10 15:16:57.640873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400
00:07:51.371 [2024-11-10 15:16:57.641050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.371 [2024-11-10 15:16:57.698004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.371 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.631 malloc2
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.631 [2024-11-10 15:16:57.910752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:51.631 [2024-11-10 15:16:57.919959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.631 [2024-11-10 15:16:57.922174] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 72951
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@952 -- # '[' -z 72951 ']'
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # kill -0 72951
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # uname
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:51.631 15:16:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72951
00:07:51.891 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:51.891 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:51.891 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72951'
00:07:51.891 killing process with pid 72951
00:07:51.891 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@971 -- # kill 72951
00:07:51.891 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@976 -- # wait 72951
00:07:51.891 [2024-11-10 15:16:58.018493] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:51.891 [2024-11-10 15:16:58.019402] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:07:51.891 [2024-11-10 15:16:58.019465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:51.891 [2024-11-10 15:16:58.019489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:07:51.891 [2024-11-10 15:16:58.029302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:51.891 [2024-11-10 15:16:58.029647] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:51.891 [2024-11-10 15:16:58.029662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline
00:07:52.151 [2024-11-10 15:16:58.424121] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:52.411 15:16:58 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:52.411 
00:07:52.411 real 0m2.122s
00:07:52.411 user 0m1.939s
00:07:52.411 sys 0m0.627s
00:07:52.411 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:52.411 15:16:58 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.411 ************************************
00:07:52.411 END TEST raid1_resize_data_offset_test
00:07:52.411 ************************************
00:07:52.688 15:16:58 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:52.688 15:16:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:07:52.688 15:16:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:52.688 15:16:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:52.688 ************************************
00:07:52.688 START TEST raid0_resize_superblock_test
00:07:52.688 ************************************
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 0
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73007
00:07:52.689 Process raid pid: 73007
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73007'
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73007
00:07:52.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 73007 ']' 00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:52.689 15:16:58 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.689 [2024-11-10 15:16:58.909282] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:52.689 [2024-11-10 15:16:58.909409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.983 [2024-11-10 15:16:59.044318] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:52.983 [2024-11-10 15:16:59.082225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.983 [2024-11-10 15:16:59.123305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.983 [2024-11-10 15:16:59.199322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.983 [2024-11-10 15:16:59.199359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.551 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:53.551 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:53.551 15:16:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:53.551 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.551 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 malloc0 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 [2024-11-10 15:16:59.949750] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:53.810 [2024-11-10 15:16:59.949843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.810 [2024-11-10 15:16:59.949879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.810 [2024-11-10 15:16:59.949905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:53.810 [2024-11-10 15:16:59.952575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.810 pt0 00:07:53.810 [2024-11-10 15:16:59.952724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:16:59 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 26c0d25a-f66d-486d-afb1-b1dd01768264 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 
-- # case $raid_level in 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.810 [2024-11-10 15:17:00.158685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5 is claimed 00:07:53.810 [2024-11-10 15:17:00.158795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1 is claimed 00:07:53.810 [2024-11-10 15:17:00.158919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:53.810 [2024-11-10 15:17:00.158929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:53.810 [2024-11-10 15:17:00.159274] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:53.810 [2024-11-10 15:17:00.159448] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:53.810 [2024-11-10 15:17:00.159472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:53.810 [2024-11-10 15:17:00.159610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:53.810 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.810 15:17:00 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:54.070 [2024-11-10 15:17:00.270950] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 [2024-11-10 15:17:00.318967] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:54.070 [2024-11-10 15:17:00.319002] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5' was resized: old size 131072, new size 204800 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 [2024-11-10 15:17:00.330822] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:54.070 [2024-11-10 15:17:00.330853] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1' was resized: old size 131072, new size 204800 00:07:54.070 [2024-11-10 15:17:00.330876] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.070 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:54.331 [2024-11-10 15:17:00.447064] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 [2024-11-10 15:17:00.494803] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:54.331 [2024-11-10 15:17:00.494939] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:54.331 [2024-11-10 15:17:00.494971] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.331 [2024-11-10 15:17:00.495022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:54.331 [2024-11-10 15:17:00.495239] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.331 [2024-11-10 15:17:00.495342] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.331 [2024-11-10 15:17:00.495391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 [2024-11-10 15:17:00.506721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:54.331 [2024-11-10 15:17:00.506818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.331 [2024-11-10 15:17:00.506861] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:54.331 [2024-11-10 15:17:00.506890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.331 [2024-11-10 15:17:00.509430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.331 [2024-11-10 15:17:00.509497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:54.331 [2024-11-10 15:17:00.511014] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5 00:07:54.331 [2024-11-10 15:17:00.511126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5 is claimed 00:07:54.331 [2024-11-10 15:17:00.511266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1 00:07:54.331 [2024-11-10 15:17:00.511337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1 is claimed 00:07:54.331 [2024-11-10 15:17:00.511516] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9e5428c9-bbe1-4bd0-bb3a-97206c0f3fb1 (2) smaller than existing raid bdev Raid (3) 00:07:54.331 [2024-11-10 15:17:00.511579] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev f0cbc0e4-56d7-4132-9aa3-9a9a5d6eb9a5: File exists 00:07:54.331 [2024-11-10 15:17:00.511670] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:54.331 [2024-11-10 15:17:00.511698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:54.331 [2024-11-10 15:17:00.511978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:54.331 pt0 00:07:54.331 [2024-11-10 15:17:00.512152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:54.331 [2024-11-10 15:17:00.512168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:54.331 [2024-11-10 15:17:00.512283] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.331 15:17:00 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:54.331 [2024-11-10 15:17:00.531356] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73007 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 73007 ']' 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 73007 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73007 00:07:54.331 killing process with pid 73007 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73007' 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 73007 00:07:54.331 [2024-11-10 15:17:00.614293] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:54.331 [2024-11-10 15:17:00.614384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.331 [2024-11-10 15:17:00.614427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.331 [2024-11-10 15:17:00.614440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:54.331 15:17:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 73007 00:07:54.591 [2024-11-10 15:17:00.920127] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.159 15:17:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:55.159 00:07:55.159 real 0m2.417s 00:07:55.159 user 0m2.551s 00:07:55.159 sys 0m0.654s 00:07:55.159 15:17:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:55.159 ************************************ 00:07:55.159 END TEST raid0_resize_superblock_test 00:07:55.159 ************************************ 00:07:55.159 15:17:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.159 15:17:01 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:55.159 15:17:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:55.159 15:17:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.159 15:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.159 ************************************ 00:07:55.159 START TEST raid1_resize_superblock_test 00:07:55.159 ************************************ 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1127 -- # raid_resize_superblock_test 1 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:55.159 15:17:01 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=73084 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 73084' 00:07:55.159 Process raid pid: 73084 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 73084 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 73084 ']' 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.159 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.160 15:17:01 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.160 [2024-11-10 15:17:01.399301] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:55.160 [2024-11-10 15:17:01.399524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.419 [2024-11-10 15:17:01.540134] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:07:55.419 [2024-11-10 15:17:01.577496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.419 [2024-11-10 15:17:01.617475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.419 [2024-11-10 15:17:01.694199] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.419 [2024-11-10 15:17:01.694236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.990 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.990 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:07:55.990 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:55.990 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.990 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 malloc0 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 [2024-11-10 15:17:02.410647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:56.249 [2024-11-10 15:17:02.410719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.249 [2024-11-10 15:17:02.410754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:56.249 [2024-11-10 15:17:02.410767] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:07:56.249 [2024-11-10 15:17:02.413206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.249 [2024-11-10 15:17:02.413244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:56.249 pt0 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 a4a60705-e5b1-446a-8d80-096b58942d22 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.249 cbb974a8-8afc-4a86-bd04-937626b995f3 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.249 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 3522489f-26bc-4567-90eb-43f7903c6c5f 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 [2024-11-10 15:17:02.618386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cbb974a8-8afc-4a86-bd04-937626b995f3 is claimed 00:07:56.509 [2024-11-10 15:17:02.618487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3522489f-26bc-4567-90eb-43f7903c6c5f is claimed 00:07:56.509 [2024-11-10 15:17:02.618601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:56.509 [2024-11-10 15:17:02.618615] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:56.509 [2024-11-10 15:17:02.618880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:56.509 [2024-11-10 15:17:02.619057] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:56.509 [2024-11-10 15:17:02.619075] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:07:56.509 [2024-11-10 15:17:02.619242] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:56.509 15:17:02 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 [2024-11-10 15:17:02.730637] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 
00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 [2024-11-10 15:17:02.770584] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.509 [2024-11-10 15:17:02.770616] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'cbb974a8-8afc-4a86-bd04-937626b995f3' was resized: old size 131072, new size 204800 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.509 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.509 [2024-11-10 15:17:02.782485] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:56.509 [2024-11-10 15:17:02.782512] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '3522489f-26bc-4567-90eb-43f7903c6c5f' was resized: old size 131072, new size 204800 00:07:56.509 [2024-11-10 15:17:02.782535] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.510 15:17:02 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.510 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:07:56.770 [2024-11-10 15:17:02.894605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.770 [2024-11-10 15:17:02.942586] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:56.770 [2024-11-10 15:17:02.942692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:56.770 [2024-11-10 15:17:02.942735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:56.770 [2024-11-10 15:17:02.942956] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.770 [2024-11-10 15:17:02.943188] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.770 [2024-11-10 15:17:02.943364] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.770 [2024-11-10 15:17:02.943390] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.770 [2024-11-10 15:17:02.954466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:56.770 [2024-11-10 15:17:02.954617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.770 [2024-11-10 15:17:02.954655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:56.770 [2024-11-10 15:17:02.954666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.770 [2024-11-10 15:17:02.957255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.770 [2024-11-10 15:17:02.957294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:56.770 [2024-11-10 15:17:02.958938] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev cbb974a8-8afc-4a86-bd04-937626b995f3 00:07:56.770 [2024-11-10 15:17:02.959002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev cbb974a8-8afc-4a86-bd04-937626b995f3 is claimed 00:07:56.770 [2024-11-10 15:17:02.959132] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 3522489f-26bc-4567-90eb-43f7903c6c5f 00:07:56.770 [2024-11-10 15:17:02.959157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 3522489f-26bc-4567-90eb-43f7903c6c5f is claimed 00:07:56.770 [2024-11-10 15:17:02.959317] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 3522489f-26bc-4567-90eb-43f7903c6c5f (2) smaller than existing raid bdev Raid (3) 00:07:56.770 [2024-11-10 15:17:02.959338] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine 
bdev cbb974a8-8afc-4a86-bd04-937626b995f3: File exists 00:07:56.770 [2024-11-10 15:17:02.959378] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:56.770 [2024-11-10 15:17:02.959386] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:56.770 [2024-11-10 15:17:02.959649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:07:56.770 [2024-11-10 15:17:02.959791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:56.770 [2024-11-10 15:17:02.959804] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:56.770 [2024-11-10 15:17:02.959932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.770 pt0 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:56.770 15:17:02 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.770 [2024-11-10 15:17:02.983402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:56.770 15:17:02 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 73084 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 73084 ']' 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # kill -0 73084 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # uname 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73084 00:07:56.770 killing process with pid 73084 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73084' 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@971 -- # kill 73084 00:07:56.770 [2024-11-10 15:17:03.050395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:07:56.770 [2024-11-10 15:17:03.050527] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.770 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@976 -- # wait 73084 00:07:56.770 [2024-11-10 15:17:03.050591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.770 [2024-11-10 15:17:03.050608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:57.030 [2024-11-10 15:17:03.356795] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.598 15:17:03 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:57.598 00:07:57.598 real 0m2.374s 00:07:57.598 user 0m2.476s 00:07:57.598 sys 0m0.654s 00:07:57.599 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.599 ************************************ 00:07:57.599 END TEST raid1_resize_superblock_test 00:07:57.599 ************************************ 00:07:57.599 15:17:03 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:57.599 15:17:03 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:57.599 15:17:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:57.599 15:17:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.599 15:17:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.599 ************************************ 
00:07:57.599 START TEST raid_function_test_raid0 00:07:57.599 ************************************ 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1127 -- # raid_function_test raid0 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:57.599 Process raid pid: 73164 00:07:57.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=73164 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73164' 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 73164 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@833 -- # '[' -z 73164 ']' 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.599 15:17:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:57.599 [2024-11-10 15:17:03.866407] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:07:57.599 [2024-11-10 15:17:03.866545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.858 [2024-11-10 15:17:04.006112] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.858 [2024-11-10 15:17:04.043293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.858 [2024-11-10 15:17:04.081311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.858 [2024-11-10 15:17:04.157111] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.858 [2024-11-10 15:17:04.157149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@866 -- # return 0 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:58.427 Base_1 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:58.427 Base_2 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:58.427 [2024-11-10 15:17:04.723720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:58.427 [2024-11-10 15:17:04.725893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:58.427 [2024-11-10 15:17:04.725998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:07:58.427 [2024-11-10 15:17:04.726045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:58.427 [2024-11-10 15:17:04.726327] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:07:58.427 [2024-11-10 15:17:04.726491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:07:58.427 [2024-11-10 15:17:04.726533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:07:58.427 [2024-11-10 15:17:04.726709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:58.427 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:58.688 [2024-11-10 15:17:04.963831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:07:58.688 /dev/nbd0 00:07:58.688 15:17:04 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # local i 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:58.688 15:17:04 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # break 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.688 1+0 records in 00:07:58.688 1+0 records out 00:07:58.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300055 s, 13.7 MB/s 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # size=4096 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 
-- # return 0 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:58.688 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:58.947 { 00:07:58.947 "nbd_device": "/dev/nbd0", 00:07:58.947 "bdev_name": "raid" 00:07:58.947 } 00:07:58.947 ]' 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:58.947 { 00:07:58.947 "nbd_device": "/dev/nbd0", 00:07:58.947 "bdev_name": "raid" 00:07:58.947 } 00:07:58.947 ]' 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 
00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:58.947 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:59.207 4096+0 records in 00:07:59.207 4096+0 records out 00:07:59.207 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0349434 s, 60.0 MB/s 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd 
if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:59.207 4096+0 records in 00:07:59.207 4096+0 records out 00:07:59.207 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.207229 s, 10.1 MB/s 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:59.207 128+0 records in 00:07:59.207 128+0 records out 00:07:59.207 65536 bytes (66 kB, 64 KiB) copied, 0.00126272 s, 51.9 MB/s 00:07:59.207 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:59.467 2035+0 records in 00:07:59.467 2035+0 records out 00:07:59.467 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0141103 s, 73.8 MB/s 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:59.467 456+0 records in 00:07:59.467 456+0 records out 00:07:59.467 233472 bytes (233 kB, 228 KiB) copied, 0.00390129 s, 59.8 MB/s 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # 
return 0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.467 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.726 [2024-11-10 15:17:05.865711] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:59.726 15:17:05 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:59.726 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:59.726 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.726 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 73164 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@952 -- # '[' -z 73164 ']' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # kill -0 73164 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # uname 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73164 
00:07:59.986 killing process with pid 73164 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73164' 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@971 -- # kill 73164 00:07:59.986 [2024-11-10 15:17:06.164955] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:59.986 [2024-11-10 15:17:06.165086] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.986 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@976 -- # wait 73164 00:07:59.986 [2024-11-10 15:17:06.165150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.986 [2024-11-10 15:17:06.165169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:07:59.986 [2024-11-10 15:17:06.207648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.245 15:17:06 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:00.245 00:08:00.245 real 0m2.762s 00:08:00.245 user 0m3.224s 00:08:00.245 sys 0m1.034s 00:08:00.245 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.245 15:17:06 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:00.245 ************************************ 00:08:00.245 END TEST raid_function_test_raid0 00:08:00.245 ************************************ 00:08:00.245 15:17:06 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:00.245 15:17:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 
-le 1 ']' 00:08:00.245 15:17:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.245 15:17:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.505 ************************************ 00:08:00.505 START TEST raid_function_test_concat 00:08:00.505 ************************************ 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1127 -- # raid_function_test concat 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=73277 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:00.505 Process raid pid: 73277 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 73277' 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 73277 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@833 -- # '[' -z 73277 ']' 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.505 15:17:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:00.505 [2024-11-10 15:17:06.696315] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:00.505 [2024-11-10 15:17:06.696450] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.505 [2024-11-10 15:17:06.834521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.505 [2024-11-10 15:17:06.853201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.764 [2024-11-10 15:17:06.891954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.765 [2024-11-10 15:17:06.967994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.765 [2024-11-10 15:17:06.968041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@866 -- # return 0 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 Base_1 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.334 15:17:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 Base_2 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 [2024-11-10 15:17:07.571307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:01.334 [2024-11-10 15:17:07.573413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:01.334 [2024-11-10 15:17:07.573495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:01.334 [2024-11-10 15:17:07.573507] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:01.334 [2024-11-10 15:17:07.573779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:01.334 [2024-11-10 15:17:07.573903] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:01.334 [2024-11-10 15:17:07.573922] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007400 00:08:01.334 [2024-11-10 15:17:07.574093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.334 15:17:07 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:01.334 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:01.593 [2024-11-10 15:17:07.803456] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:01.593 /dev/nbd0 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # local i 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # break 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:01.593 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.594 1+0 records in 00:08:01.594 1+0 records out 00:08:01.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428477 s, 9.6 MB/s 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # size=4096 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # '[' 
4096 '!=' 0 ']' 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # return 0 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:01.594 15:17:07 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:01.853 { 00:08:01.853 "nbd_device": "/dev/nbd0", 00:08:01.853 "bdev_name": "raid" 00:08:01.853 } 00:08:01.853 ]' 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:01.853 { 00:08:01.853 "nbd_device": "/dev/nbd0", 00:08:01.853 "bdev_name": "raid" 00:08:01.853 } 00:08:01.853 ]' 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:01.853 4096+0 records in 00:08:01.853 4096+0 records out 00:08:01.853 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.0319589 s, 65.6 MB/s 00:08:01.853 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:02.112 4096+0 records in 00:08:02.112 4096+0 records out 00:08:02.112 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.217389 s, 9.6 MB/s 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:02.112 128+0 records in 00:08:02.112 128+0 records out 00:08:02.112 65536 bytes (66 kB, 64 KiB) copied, 0.00115647 s, 56.7 MB/s 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:02.112 15:17:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:02.112 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:02.372 2035+0 records in 00:08:02.372 2035+0 records out 00:08:02.372 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0144034 s, 72.3 MB/s 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:02.372 456+0 records in 00:08:02.372 456+0 records out 00:08:02.372 233472 bytes (233 kB, 228 KiB) copied, 0.00359077 s, 65.0 MB/s 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:02.372 15:17:08 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.372 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.632 [2024-11-10 15:17:08.738045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.632 
15:17:08 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.632 15:17:08 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 73277 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@952 -- # '[' -z 73277 ']' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # kill -0 73277 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@957 -- # uname 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73277 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:02.903 killing process with pid 73277 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73277' 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@971 -- # kill 73277 00:08:02.903 [2024-11-10 15:17:09.056774] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.903 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@976 -- # wait 73277 00:08:02.903 [2024-11-10 15:17:09.056941] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.903 [2024-11-10 15:17:09.057041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.903 [2024-11-10 15:17:09.057060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid, state offline 00:08:02.903 [2024-11-10 15:17:09.100325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.205 15:17:09 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:08:03.205 00:08:03.205 real 0m2.823s 00:08:03.205 user 0m3.375s 00:08:03.205 sys 0m0.990s 00:08:03.205 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.205 15:17:09 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:03.205 ************************************ 00:08:03.205 END TEST raid_function_test_concat 00:08:03.205 ************************************ 
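The `blkdiscard -o`/`-l` arguments seen in the runs above follow directly from the test's block arithmetic: each entry of `unmap_blk_offs`/`unmap_blk_nums` is multiplied by the 512-byte logical sector size reported by `lsblk -o LOG-SEC`. A minimal sketch of that conversion (variable names mirror the script's, but this is an illustration, not the SPDK script itself):

```python
# Sketch of the sector-to-byte conversion bdev_raid.sh performs for each
# unmap case. The block size and (offset, count) pairs are the ones visible
# in the log above.
BLKSIZE = 512  # logical sector size from `lsblk -o LOG-SEC /dev/nbd0`

unmap_blk_offs = [0, 1028, 321]   # dd seek= values, in blocks
unmap_blk_nums = [128, 2035, 456] # dd count= values, in blocks

for off_blk, num_blk in zip(unmap_blk_offs, unmap_blk_nums):
    unmap_off = off_blk * BLKSIZE  # byte offset passed to `blkdiscard -o`
    unmap_len = num_blk * BLKSIZE  # byte length passed to `blkdiscard -l`
    print(unmap_off, unmap_len)
# → 0 65536
# → 526336 1041920
# → 164352 233472
```

These products match the `unmap_off`/`unmap_len` values logged in both the raid0 and concat runs (e.g. `1028 * 512 = 526336`, `2035 * 512 = 1041920`), confirming that each discard region is zeroed in the reference file before `cmp` re-verifies the full 2 MiB device contents.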
00:08:03.205 15:17:09 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:08:03.205 15:17:09 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:03.205 15:17:09 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.205 15:17:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.205 ************************************ 00:08:03.205 START TEST raid0_resize_test 00:08:03.205 ************************************ 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 0 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73394 00:08:03.205 Process raid pid: 73394 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73394' 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73394 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # '[' -z 
73394 ']' 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.205 15:17:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.475 [2024-11-10 15:17:09.594381] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:03.476 [2024-11-10 15:17:09.594506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.476 [2024-11-10 15:17:09.731700] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.476 [2024-11-10 15:17:09.756645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.476 [2024-11-10 15:17:09.797386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.745 [2024-11-10 15:17:09.873813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.745 [2024-11-10 15:17:09.873850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 Base_1 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 Base_2 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 [2024-11-10 15:17:10.442326] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:04.313 [2024-11-10 15:17:10.444422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:04.313 [2024-11-10 15:17:10.444495] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:04.313 [2024-11-10 15:17:10.444505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:04.313 [2024-11-10 15:17:10.444807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:04.313 [2024-11-10 15:17:10.444925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:04.313 [2024-11-10 15:17:10.444939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:08:04.313 [2024-11-10 15:17:10.445118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 [2024-11-10 15:17:10.454307] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:04.313 [2024-11-10 15:17:10.454342] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:04.313 true 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 
00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 [2024-11-10 15:17:10.470547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 [2024-11-10 15:17:10.514384] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:04.313 [2024-11-10 15:17:10.514430] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:04.313 [2024-11-10 15:17:10.514467] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:08:04.313 true 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.313 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:04.314 [2024-11-10 15:17:10.526558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73394 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # '[' -z 73394 ']' 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # kill -0 73394 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73394 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.314 killing process with pid 73394 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73394' 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@971 -- # kill 73394 00:08:04.314 [2024-11-10 15:17:10.596813] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:04.314 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@976 -- # wait 73394 00:08:04.314 [2024-11-10 15:17:10.596985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.314 [2024-11-10 15:17:10.597063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.314 [2024-11-10 15:17:10.597079] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:08:04.314 [2024-11-10 15:17:10.599312] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:04.573 15:17:10 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:04.573 00:08:04.573 real 0m1.427s 00:08:04.573 user 0m1.506s 00:08:04.573 sys 0m0.374s 00:08:04.573 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.573 15:17:10 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.573 ************************************ 00:08:04.573 END TEST raid0_resize_test 00:08:04.573 ************************************ 00:08:04.832 15:17:10 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:08:04.832 15:17:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:04.832 15:17:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.832 15:17:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:04.832 ************************************ 00:08:04.832 START TEST raid1_resize_test 00:08:04.832 ************************************ 00:08:04.832 15:17:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1127 -- # raid_resize_test 1 00:08:04.832 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 
00:08:04.832 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:08:04.832 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:08:04.832 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:08:04.833 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:08:04.833 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:08:04.833 15:17:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=73439 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 73439' 00:08:04.833 Process raid pid: 73439 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 73439 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@833 -- # '[' -z 73439 ']' 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:04.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:04.833 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.833 [2024-11-10 15:17:11.084507] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:04.833 [2024-11-10 15:17:11.084626] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:05.092 [2024-11-10 15:17:11.217513] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:05.092 [2024-11-10 15:17:11.243001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.092 [2024-11-10 15:17:11.281553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.092 [2024-11-10 15:17:11.357293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.092 [2024-11-10 15:17:11.357327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@866 -- # return 0 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 Base_1 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 
00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 Base_2 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 [2024-11-10 15:17:11.933873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:05.660 [2024-11-10 15:17:11.935942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:05.660 [2024-11-10 15:17:11.936001] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:05.660 [2024-11-10 15:17:11.936020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:05.660 [2024-11-10 15:17:11.936281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:05.660 [2024-11-10 15:17:11.936402] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:05.660 [2024-11-10 15:17:11.936417] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007400 00:08:05.660 [2024-11-10 15:17:11.936524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:08:05.660 
15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 [2024-11-10 15:17:11.945860] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:05.660 [2024-11-10 15:17:11.945892] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:08:05.660 true 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 [2024-11-10 15:17:11.962152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:08:05.660 15:17:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:08:05.660 [2024-11-10 15:17:12.005959] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:05.660 [2024-11-10 15:17:12.006024] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:08:05.660 [2024-11-10 15:17:12.006061] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:08:05.660 true 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.660 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:08:05.660 [2024-11-10 15:17:12.018097] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 73439 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@952 -- # '[' -z 73439 ']' 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # kill -0 73439 
00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # uname 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73439 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:05.920 killing process with pid 73439 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73439' 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@971 -- # kill 73439 00:08:05.920 [2024-11-10 15:17:12.108370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.920 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@976 -- # wait 73439 00:08:05.920 [2024-11-10 15:17:12.108543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:05.920 [2024-11-10 15:17:12.109085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:05.920 [2024-11-10 15:17:12.109110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Raid, state offline 00:08:05.920 [2024-11-10 15:17:12.110866] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.179 15:17:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:08:06.179 00:08:06.179 real 0m1.439s 00:08:06.179 user 0m1.542s 00:08:06.179 sys 0m0.359s 00:08:06.179 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.179 15:17:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.179 ************************************ 00:08:06.179 END TEST 
raid1_resize_test 00:08:06.179 ************************************ 00:08:06.179 15:17:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:06.179 15:17:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:06.179 15:17:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:06.179 15:17:12 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:06.179 15:17:12 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.179 15:17:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.179 ************************************ 00:08:06.179 START TEST raid_state_function_test 00:08:06.179 ************************************ 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 false 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:06.179 15:17:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:06.179 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73496 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73496' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.180 Process raid pid: 73496 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73496 00:08:06.180 15:17:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 73496 ']' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:06.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:06.180 15:17:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.439 [2024-11-10 15:17:12.603846] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:06.439 [2024-11-10 15:17:12.603970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.439 [2024-11-10 15:17:12.743057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.439 [2024-11-10 15:17:12.780922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.697 [2024-11-10 15:17:12.820299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.697 [2024-11-10 15:17:12.896960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.698 [2024-11-10 15:17:12.897004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.266 [2024-11-10 15:17:13.417480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.266 [2024-11-10 15:17:13.417542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.266 [2024-11-10 15:17:13.417562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.266 [2024-11-10 15:17:13.417571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.266 "name": "Existed_Raid", 00:08:07.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.266 "strip_size_kb": 64, 00:08:07.266 "state": "configuring", 00:08:07.266 "raid_level": "raid0", 00:08:07.266 "superblock": false, 00:08:07.266 "num_base_bdevs": 2, 00:08:07.266 "num_base_bdevs_discovered": 0, 00:08:07.266 "num_base_bdevs_operational": 2, 00:08:07.266 "base_bdevs_list": [ 00:08:07.266 { 00:08:07.266 "name": "BaseBdev1", 00:08:07.266 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:07.266 "is_configured": false, 00:08:07.266 "data_offset": 0, 00:08:07.266 "data_size": 0 00:08:07.266 }, 00:08:07.266 { 00:08:07.266 "name": "BaseBdev2", 00:08:07.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.266 "is_configured": false, 00:08:07.266 "data_offset": 0, 00:08:07.266 "data_size": 0 00:08:07.266 } 00:08:07.266 ] 00:08:07.266 }' 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.266 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.526 [2024-11-10 15:17:13.877549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:07.526 [2024-11-10 15:17:13.877607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.526 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.785 [2024-11-10 15:17:13.889651] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.785 [2024-11-10 15:17:13.889733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.785 [2024-11-10 
15:17:13.889751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:07.785 [2024-11-10 15:17:13.889764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:07.785 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.785 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:07.785 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.785 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.785 [2024-11-10 15:17:13.917199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.786 BaseBdev1 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.786 15:17:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.786 [ 00:08:07.786 { 00:08:07.786 "name": "BaseBdev1", 00:08:07.786 "aliases": [ 00:08:07.786 "03fc0f7e-9a83-4798-8c39-712bb1e6ca14" 00:08:07.786 ], 00:08:07.786 "product_name": "Malloc disk", 00:08:07.786 "block_size": 512, 00:08:07.786 "num_blocks": 65536, 00:08:07.786 "uuid": "03fc0f7e-9a83-4798-8c39-712bb1e6ca14", 00:08:07.786 "assigned_rate_limits": { 00:08:07.786 "rw_ios_per_sec": 0, 00:08:07.786 "rw_mbytes_per_sec": 0, 00:08:07.786 "r_mbytes_per_sec": 0, 00:08:07.786 "w_mbytes_per_sec": 0 00:08:07.786 }, 00:08:07.786 "claimed": true, 00:08:07.786 "claim_type": "exclusive_write", 00:08:07.786 "zoned": false, 00:08:07.786 "supported_io_types": { 00:08:07.786 "read": true, 00:08:07.786 "write": true, 00:08:07.786 "unmap": true, 00:08:07.786 "flush": true, 00:08:07.786 "reset": true, 00:08:07.786 "nvme_admin": false, 00:08:07.786 "nvme_io": false, 00:08:07.786 "nvme_io_md": false, 00:08:07.786 "write_zeroes": true, 00:08:07.786 "zcopy": true, 00:08:07.786 "get_zone_info": false, 00:08:07.786 "zone_management": false, 00:08:07.786 "zone_append": false, 00:08:07.786 "compare": false, 00:08:07.786 "compare_and_write": false, 00:08:07.786 "abort": true, 00:08:07.786 "seek_hole": false, 00:08:07.786 "seek_data": false, 00:08:07.786 "copy": true, 00:08:07.786 "nvme_iov_md": false 00:08:07.786 }, 00:08:07.786 "memory_domains": [ 00:08:07.786 { 00:08:07.786 "dma_device_id": "system", 00:08:07.786 "dma_device_type": 1 00:08:07.786 }, 00:08:07.786 { 00:08:07.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.786 "dma_device_type": 
2 00:08:07.786 } 00:08:07.786 ], 00:08:07.786 "driver_specific": {} 00:08:07.786 } 00:08:07.786 ] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.786 "name": "Existed_Raid", 00:08:07.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.786 "strip_size_kb": 64, 00:08:07.786 "state": "configuring", 00:08:07.786 "raid_level": "raid0", 00:08:07.786 "superblock": false, 00:08:07.786 "num_base_bdevs": 2, 00:08:07.786 "num_base_bdevs_discovered": 1, 00:08:07.786 "num_base_bdevs_operational": 2, 00:08:07.786 "base_bdevs_list": [ 00:08:07.786 { 00:08:07.786 "name": "BaseBdev1", 00:08:07.786 "uuid": "03fc0f7e-9a83-4798-8c39-712bb1e6ca14", 00:08:07.786 "is_configured": true, 00:08:07.786 "data_offset": 0, 00:08:07.786 "data_size": 65536 00:08:07.786 }, 00:08:07.786 { 00:08:07.786 "name": "BaseBdev2", 00:08:07.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.786 "is_configured": false, 00:08:07.786 "data_offset": 0, 00:08:07.786 "data_size": 0 00:08:07.786 } 00:08:07.786 ] 00:08:07.786 }' 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.786 15:17:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.046 [2024-11-10 15:17:14.357325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.046 [2024-11-10 15:17:14.357399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.046 15:17:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.046 [2024-11-10 15:17:14.365375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.046 [2024-11-10 15:17:14.367541] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:08.046 [2024-11-10 15:17:14.367581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.046 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.305 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.305 "name": "Existed_Raid", 00:08:08.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.305 "strip_size_kb": 64, 00:08:08.305 "state": "configuring", 00:08:08.305 "raid_level": "raid0", 00:08:08.305 "superblock": false, 00:08:08.305 "num_base_bdevs": 2, 00:08:08.305 "num_base_bdevs_discovered": 1, 00:08:08.305 "num_base_bdevs_operational": 2, 00:08:08.305 "base_bdevs_list": [ 00:08:08.305 { 00:08:08.305 "name": "BaseBdev1", 00:08:08.305 "uuid": "03fc0f7e-9a83-4798-8c39-712bb1e6ca14", 00:08:08.305 "is_configured": true, 00:08:08.305 "data_offset": 0, 00:08:08.305 "data_size": 65536 00:08:08.305 }, 00:08:08.305 { 00:08:08.305 "name": "BaseBdev2", 00:08:08.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:08.305 "is_configured": false, 00:08:08.305 "data_offset": 0, 00:08:08.305 "data_size": 0 00:08:08.305 } 00:08:08.305 ] 00:08:08.305 }' 00:08:08.305 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.305 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 
15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 [2024-11-10 15:17:14.786889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:08.565 [2024-11-10 15:17:14.786935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:08.565 [2024-11-10 15:17:14.786948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:08.565 [2024-11-10 15:17:14.787300] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:08.565 [2024-11-10 15:17:14.787478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:08.565 [2024-11-10 15:17:14.787494] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:08.565 [2024-11-10 15:17:14.787730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.565 BaseBdev2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 [ 00:08:08.565 { 00:08:08.565 "name": "BaseBdev2", 00:08:08.565 "aliases": [ 00:08:08.565 "63db27fa-a9ab-45ee-b948-88beca22baa8" 00:08:08.565 ], 00:08:08.565 "product_name": "Malloc disk", 00:08:08.565 "block_size": 512, 00:08:08.565 "num_blocks": 65536, 00:08:08.565 "uuid": "63db27fa-a9ab-45ee-b948-88beca22baa8", 00:08:08.565 "assigned_rate_limits": { 00:08:08.565 "rw_ios_per_sec": 0, 00:08:08.565 "rw_mbytes_per_sec": 0, 00:08:08.565 "r_mbytes_per_sec": 0, 00:08:08.565 "w_mbytes_per_sec": 0 00:08:08.565 }, 00:08:08.565 "claimed": true, 00:08:08.565 "claim_type": "exclusive_write", 00:08:08.565 "zoned": false, 00:08:08.565 "supported_io_types": { 00:08:08.565 "read": true, 00:08:08.565 "write": true, 00:08:08.565 "unmap": true, 00:08:08.565 "flush": true, 00:08:08.565 "reset": true, 00:08:08.565 "nvme_admin": false, 00:08:08.565 "nvme_io": false, 00:08:08.565 "nvme_io_md": false, 00:08:08.565 "write_zeroes": true, 00:08:08.565 "zcopy": true, 00:08:08.565 "get_zone_info": false, 00:08:08.565 "zone_management": false, 00:08:08.565 "zone_append": false, 00:08:08.565 "compare": false, 00:08:08.565 "compare_and_write": false, 
00:08:08.565 "abort": true, 00:08:08.565 "seek_hole": false, 00:08:08.565 "seek_data": false, 00:08:08.565 "copy": true, 00:08:08.565 "nvme_iov_md": false 00:08:08.565 }, 00:08:08.565 "memory_domains": [ 00:08:08.565 { 00:08:08.565 "dma_device_id": "system", 00:08:08.565 "dma_device_type": 1 00:08:08.565 }, 00:08:08.565 { 00:08:08.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.565 "dma_device_type": 2 00:08:08.565 } 00:08:08.565 ], 00:08:08.565 "driver_specific": {} 00:08:08.565 } 00:08:08.565 ] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.565 
15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.565 "name": "Existed_Raid", 00:08:08.565 "uuid": "6f73c4d3-63ad-42b3-a6e3-c590a3aa07df", 00:08:08.565 "strip_size_kb": 64, 00:08:08.565 "state": "online", 00:08:08.565 "raid_level": "raid0", 00:08:08.565 "superblock": false, 00:08:08.565 "num_base_bdevs": 2, 00:08:08.565 "num_base_bdevs_discovered": 2, 00:08:08.565 "num_base_bdevs_operational": 2, 00:08:08.565 "base_bdevs_list": [ 00:08:08.565 { 00:08:08.565 "name": "BaseBdev1", 00:08:08.565 "uuid": "03fc0f7e-9a83-4798-8c39-712bb1e6ca14", 00:08:08.565 "is_configured": true, 00:08:08.565 "data_offset": 0, 00:08:08.565 "data_size": 65536 00:08:08.565 }, 00:08:08.565 { 00:08:08.565 "name": "BaseBdev2", 00:08:08.565 "uuid": "63db27fa-a9ab-45ee-b948-88beca22baa8", 00:08:08.565 "is_configured": true, 00:08:08.565 "data_offset": 0, 00:08:08.565 "data_size": 65536 00:08:08.565 } 00:08:08.565 ] 00:08:08.565 }' 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.565 15:17:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.134 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:09.135 15:17:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.135 [2024-11-10 15:17:15.259446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.135 "name": "Existed_Raid", 00:08:09.135 "aliases": [ 00:08:09.135 "6f73c4d3-63ad-42b3-a6e3-c590a3aa07df" 00:08:09.135 ], 00:08:09.135 "product_name": "Raid Volume", 00:08:09.135 "block_size": 512, 00:08:09.135 "num_blocks": 131072, 00:08:09.135 "uuid": "6f73c4d3-63ad-42b3-a6e3-c590a3aa07df", 00:08:09.135 "assigned_rate_limits": { 00:08:09.135 "rw_ios_per_sec": 0, 00:08:09.135 "rw_mbytes_per_sec": 0, 00:08:09.135 "r_mbytes_per_sec": 0, 00:08:09.135 "w_mbytes_per_sec": 0 00:08:09.135 }, 00:08:09.135 "claimed": false, 00:08:09.135 "zoned": false, 00:08:09.135 "supported_io_types": { 00:08:09.135 "read": true, 00:08:09.135 "write": true, 00:08:09.135 "unmap": true, 00:08:09.135 
"flush": true, 00:08:09.135 "reset": true, 00:08:09.135 "nvme_admin": false, 00:08:09.135 "nvme_io": false, 00:08:09.135 "nvme_io_md": false, 00:08:09.135 "write_zeroes": true, 00:08:09.135 "zcopy": false, 00:08:09.135 "get_zone_info": false, 00:08:09.135 "zone_management": false, 00:08:09.135 "zone_append": false, 00:08:09.135 "compare": false, 00:08:09.135 "compare_and_write": false, 00:08:09.135 "abort": false, 00:08:09.135 "seek_hole": false, 00:08:09.135 "seek_data": false, 00:08:09.135 "copy": false, 00:08:09.135 "nvme_iov_md": false 00:08:09.135 }, 00:08:09.135 "memory_domains": [ 00:08:09.135 { 00:08:09.135 "dma_device_id": "system", 00:08:09.135 "dma_device_type": 1 00:08:09.135 }, 00:08:09.135 { 00:08:09.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.135 "dma_device_type": 2 00:08:09.135 }, 00:08:09.135 { 00:08:09.135 "dma_device_id": "system", 00:08:09.135 "dma_device_type": 1 00:08:09.135 }, 00:08:09.135 { 00:08:09.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.135 "dma_device_type": 2 00:08:09.135 } 00:08:09.135 ], 00:08:09.135 "driver_specific": { 00:08:09.135 "raid": { 00:08:09.135 "uuid": "6f73c4d3-63ad-42b3-a6e3-c590a3aa07df", 00:08:09.135 "strip_size_kb": 64, 00:08:09.135 "state": "online", 00:08:09.135 "raid_level": "raid0", 00:08:09.135 "superblock": false, 00:08:09.135 "num_base_bdevs": 2, 00:08:09.135 "num_base_bdevs_discovered": 2, 00:08:09.135 "num_base_bdevs_operational": 2, 00:08:09.135 "base_bdevs_list": [ 00:08:09.135 { 00:08:09.135 "name": "BaseBdev1", 00:08:09.135 "uuid": "03fc0f7e-9a83-4798-8c39-712bb1e6ca14", 00:08:09.135 "is_configured": true, 00:08:09.135 "data_offset": 0, 00:08:09.135 "data_size": 65536 00:08:09.135 }, 00:08:09.135 { 00:08:09.135 "name": "BaseBdev2", 00:08:09.135 "uuid": "63db27fa-a9ab-45ee-b948-88beca22baa8", 00:08:09.135 "is_configured": true, 00:08:09.135 "data_offset": 0, 00:08:09.135 "data_size": 65536 00:08:09.135 } 00:08:09.135 ] 00:08:09.135 } 00:08:09.135 } 00:08:09.135 }' 00:08:09.135 
15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:09.135 BaseBdev2' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.135 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.135 [2024-11-10 15:17:15.495345] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.135 [2024-11-10 15:17:15.495398] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.135 [2024-11-10 15:17:15.495486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.395 "name": "Existed_Raid", 00:08:09.395 "uuid": "6f73c4d3-63ad-42b3-a6e3-c590a3aa07df", 00:08:09.395 "strip_size_kb": 64, 00:08:09.395 "state": "offline", 00:08:09.395 "raid_level": "raid0", 00:08:09.395 "superblock": false, 00:08:09.395 "num_base_bdevs": 2, 00:08:09.395 "num_base_bdevs_discovered": 1, 00:08:09.395 "num_base_bdevs_operational": 1, 00:08:09.395 "base_bdevs_list": [ 
00:08:09.395 { 00:08:09.395 "name": null, 00:08:09.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:09.395 "is_configured": false, 00:08:09.395 "data_offset": 0, 00:08:09.395 "data_size": 65536 00:08:09.395 }, 00:08:09.395 { 00:08:09.395 "name": "BaseBdev2", 00:08:09.395 "uuid": "63db27fa-a9ab-45ee-b948-88beca22baa8", 00:08:09.395 "is_configured": true, 00:08:09.395 "data_offset": 0, 00:08:09.395 "data_size": 65536 00:08:09.395 } 00:08:09.395 ] 00:08:09.395 }' 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.395 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.654 15:17:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:09.654 [2024-11-10 15:17:15.996390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:09.654 [2024-11-10 15:17:15.996462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73496 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 73496 ']' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 73496 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
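The trace above shows the loop at `bdev_raid.sh@270` deleting base bdevs one at a time and re-checking the raid state. The following is a reconstructed sketch of that iteration pattern, not the actual SPDK script; the bdev names and the `echo` stand-ins for `rpc_cmd` are illustrative.

```shell
# Sketch of the base-bdev removal loop seen at bdev_raid.sh@270 in the
# trace. In the real test, the echo below is rpc_cmd bdev_malloc_delete,
# and the state is re-read via rpc_cmd bdev_raid_get_bdevs.
num_base_bdevs=2
base_bdevs=(BaseBdev1 BaseBdev2)
state=online
for (( i = 1; i < num_base_bdevs; i++ )); do
    # Delete one base bdev out from under the raid bdev.
    echo "deleting ${base_bdevs[i]}"
    # A raid0 array cannot survive losing a base bdev, so the test
    # expects the state to transition to offline.
    state=offline
done
echo "final state: $state"
```

The test then asserts the observed `state` field against the expected one (here, `offline`), mirroring the `expected_state=offline` local set at `bdev_raid.sh@104` in the trace.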
00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73496 00:08:09.913 killing process with pid 73496 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73496' 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 73496 00:08:09.913 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 73496 00:08:09.913 [2024-11-10 15:17:16.095706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.913 [2024-11-10 15:17:16.097270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.172 15:17:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:10.172 00:08:10.172 real 0m3.906s 00:08:10.172 user 0m5.982s 00:08:10.172 sys 0m0.847s 00:08:10.172 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.172 15:17:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.172 ************************************ 00:08:10.172 END TEST raid_state_function_test 00:08:10.172 ************************************ 00:08:10.172 15:17:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:10.172 15:17:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:10.172 15:17:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.172 15:17:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:10.172 ************************************ 00:08:10.172 START TEST raid_state_function_test_sb 
00:08:10.173 ************************************ 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 2 true 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:10.173 15:17:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73732 00:08:10.173 Process raid pid: 73732 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73732' 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73732 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 73732 ']' 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:10.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.173 15:17:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.432 [2024-11-10 15:17:16.575794] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:10.432 [2024-11-10 15:17:16.575934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.432 [2024-11-10 15:17:16.716409] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:10.432 [2024-11-10 15:17:16.753176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.692 [2024-11-10 15:17:16.795530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.692 [2024-11-10 15:17:16.871776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.692 [2024-11-10 15:17:16.871816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:11.259 [2024-11-10 15:17:17.380850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.259 [2024-11-10 15:17:17.380913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.259 [2024-11-10 15:17:17.380927] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.259 [2024-11-10 15:17:17.380935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.259 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.259 "name": "Existed_Raid", 00:08:11.259 "uuid": "b583cf52-63a7-4a42-9faa-b70d9085e580", 00:08:11.259 "strip_size_kb": 64, 00:08:11.259 "state": "configuring", 00:08:11.259 "raid_level": "raid0", 00:08:11.259 "superblock": true, 00:08:11.259 "num_base_bdevs": 2, 00:08:11.259 "num_base_bdevs_discovered": 0, 00:08:11.259 "num_base_bdevs_operational": 2, 00:08:11.259 "base_bdevs_list": [ 00:08:11.259 { 00:08:11.259 "name": "BaseBdev1", 00:08:11.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.259 "is_configured": false, 00:08:11.259 "data_offset": 0, 00:08:11.259 "data_size": 0 00:08:11.259 }, 00:08:11.260 { 00:08:11.260 "name": "BaseBdev2", 00:08:11.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.260 "is_configured": false, 00:08:11.260 "data_offset": 0, 00:08:11.260 "data_size": 0 00:08:11.260 } 00:08:11.260 ] 00:08:11.260 }' 00:08:11.260 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.260 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
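The dumps above come from piping `rpc_cmd bdev_raid_get_bdevs all` through the jq filter `.[] | select(.name == "Existed_Raid")`. As a minimal illustration of what the test extracts from that JSON, the sketch below parses a trimmed copy of the dump from this log; the `grep`/`cut` parsing is only a stand-in for jq so the example needs no external tools.

```shell
# Trimmed copy of the raid_bdev_info JSON captured in the log above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0
}'
# Pull out the fields the test compares (jq stand-in: grep + cut).
state=$(grep -o '"state": "[a-z]*"' <<<"$raid_bdev_info" | cut -d'"' -f4)
raid_level=$(grep -o '"raid_level": "[a-z0-9]*"' <<<"$raid_bdev_info" | cut -d'"' -f4)
echo "state=$state raid_level=$raid_level"
```

`verify_raid_bdev_state` in the trace compares exactly these fields (plus `strip_size_kb` and the base-bdev counts) against the expected values passed as arguments.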
00:08:11.519 [2024-11-10 15:17:17.800876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.519 [2024-11-10 15:17:17.800935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.519 [2024-11-10 15:17:17.808887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.519 [2024-11-10 15:17:17.808928] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.519 [2024-11-10 15:17:17.808940] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.519 [2024-11-10 15:17:17.808951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.519 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.520 [2024-11-10 15:17:17.832033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.520 BaseBdev1 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.520 [ 00:08:11.520 { 00:08:11.520 "name": "BaseBdev1", 00:08:11.520 "aliases": [ 00:08:11.520 "1b8ab275-334c-4b75-b395-09ba967a49cf" 00:08:11.520 ], 00:08:11.520 "product_name": "Malloc disk", 00:08:11.520 "block_size": 512, 00:08:11.520 "num_blocks": 65536, 00:08:11.520 "uuid": "1b8ab275-334c-4b75-b395-09ba967a49cf", 00:08:11.520 "assigned_rate_limits": { 00:08:11.520 "rw_ios_per_sec": 0, 00:08:11.520 "rw_mbytes_per_sec": 0, 
00:08:11.520 "r_mbytes_per_sec": 0, 00:08:11.520 "w_mbytes_per_sec": 0 00:08:11.520 }, 00:08:11.520 "claimed": true, 00:08:11.520 "claim_type": "exclusive_write", 00:08:11.520 "zoned": false, 00:08:11.520 "supported_io_types": { 00:08:11.520 "read": true, 00:08:11.520 "write": true, 00:08:11.520 "unmap": true, 00:08:11.520 "flush": true, 00:08:11.520 "reset": true, 00:08:11.520 "nvme_admin": false, 00:08:11.520 "nvme_io": false, 00:08:11.520 "nvme_io_md": false, 00:08:11.520 "write_zeroes": true, 00:08:11.520 "zcopy": true, 00:08:11.520 "get_zone_info": false, 00:08:11.520 "zone_management": false, 00:08:11.520 "zone_append": false, 00:08:11.520 "compare": false, 00:08:11.520 "compare_and_write": false, 00:08:11.520 "abort": true, 00:08:11.520 "seek_hole": false, 00:08:11.520 "seek_data": false, 00:08:11.520 "copy": true, 00:08:11.520 "nvme_iov_md": false 00:08:11.520 }, 00:08:11.520 "memory_domains": [ 00:08:11.520 { 00:08:11.520 "dma_device_id": "system", 00:08:11.520 "dma_device_type": 1 00:08:11.520 }, 00:08:11.520 { 00:08:11.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.520 "dma_device_type": 2 00:08:11.520 } 00:08:11.520 ], 00:08:11.520 "driver_specific": {} 00:08:11.520 } 00:08:11.520 ] 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.520 15:17:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.520 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.780 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.780 "name": "Existed_Raid", 00:08:11.780 "uuid": "00da19ce-0491-414d-8447-7a8d4e507d46", 00:08:11.780 "strip_size_kb": 64, 00:08:11.780 "state": "configuring", 00:08:11.780 "raid_level": "raid0", 00:08:11.780 "superblock": true, 00:08:11.780 "num_base_bdevs": 2, 00:08:11.780 "num_base_bdevs_discovered": 1, 00:08:11.780 "num_base_bdevs_operational": 2, 00:08:11.780 "base_bdevs_list": [ 00:08:11.780 { 00:08:11.780 "name": "BaseBdev1", 00:08:11.780 "uuid": "1b8ab275-334c-4b75-b395-09ba967a49cf", 00:08:11.780 "is_configured": true, 00:08:11.780 "data_offset": 2048, 00:08:11.780 "data_size": 63488 00:08:11.780 }, 00:08:11.780 { 
00:08:11.780 "name": "BaseBdev2", 00:08:11.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.780 "is_configured": false, 00:08:11.780 "data_offset": 0, 00:08:11.780 "data_size": 0 00:08:11.780 } 00:08:11.780 ] 00:08:11.780 }' 00:08:11.780 15:17:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.780 15:17:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.039 [2024-11-10 15:17:18.272239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.039 [2024-11-10 15:17:18.272325] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.039 [2024-11-10 15:17:18.284269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.039 [2024-11-10 15:17:18.286506] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.039 [2024-11-10 15:17:18.286545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.039 15:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:12.039 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.040 15:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.040 "name": "Existed_Raid", 00:08:12.040 "uuid": "372b3649-9449-4d55-b025-7395bef38922", 00:08:12.040 "strip_size_kb": 64, 00:08:12.040 "state": "configuring", 00:08:12.040 "raid_level": "raid0", 00:08:12.040 "superblock": true, 00:08:12.040 "num_base_bdevs": 2, 00:08:12.040 "num_base_bdevs_discovered": 1, 00:08:12.040 "num_base_bdevs_operational": 2, 00:08:12.040 "base_bdevs_list": [ 00:08:12.040 { 00:08:12.040 "name": "BaseBdev1", 00:08:12.040 "uuid": "1b8ab275-334c-4b75-b395-09ba967a49cf", 00:08:12.040 "is_configured": true, 00:08:12.040 "data_offset": 2048, 00:08:12.040 "data_size": 63488 00:08:12.040 }, 00:08:12.040 { 00:08:12.040 "name": "BaseBdev2", 00:08:12.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.040 "is_configured": false, 00:08:12.040 "data_offset": 0, 00:08:12.040 "data_size": 0 00:08:12.040 } 00:08:12.040 ] 00:08:12.040 }' 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.040 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 [2024-11-10 15:17:18.749089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.609 [2024-11-10 15:17:18.749334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:12.609 [2024-11-10 15:17:18.749358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 
126976, blocklen 512 00:08:12.609 [2024-11-10 15:17:18.749683] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:12.609 [2024-11-10 15:17:18.749844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:12.609 [2024-11-10 15:17:18.749868] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:12.609 BaseBdev2 00:08:12.609 [2024-11-10 15:17:18.749991] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.609 15:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 [ 00:08:12.609 { 00:08:12.609 "name": "BaseBdev2", 00:08:12.609 "aliases": [ 00:08:12.609 "6f73290d-18ef-40ed-a585-76063187dea6" 00:08:12.609 ], 00:08:12.609 "product_name": "Malloc disk", 00:08:12.609 "block_size": 512, 00:08:12.609 "num_blocks": 65536, 00:08:12.609 "uuid": "6f73290d-18ef-40ed-a585-76063187dea6", 00:08:12.609 "assigned_rate_limits": { 00:08:12.609 "rw_ios_per_sec": 0, 00:08:12.609 "rw_mbytes_per_sec": 0, 00:08:12.609 "r_mbytes_per_sec": 0, 00:08:12.609 "w_mbytes_per_sec": 0 00:08:12.609 }, 00:08:12.609 "claimed": true, 00:08:12.609 "claim_type": "exclusive_write", 00:08:12.609 "zoned": false, 00:08:12.609 "supported_io_types": { 00:08:12.609 "read": true, 00:08:12.609 "write": true, 00:08:12.609 "unmap": true, 00:08:12.609 "flush": true, 00:08:12.609 "reset": true, 00:08:12.609 "nvme_admin": false, 00:08:12.609 "nvme_io": false, 00:08:12.609 "nvme_io_md": false, 00:08:12.609 "write_zeroes": true, 00:08:12.609 "zcopy": true, 00:08:12.609 "get_zone_info": false, 00:08:12.609 "zone_management": false, 00:08:12.609 "zone_append": false, 00:08:12.609 "compare": false, 00:08:12.609 "compare_and_write": false, 00:08:12.609 "abort": true, 00:08:12.609 "seek_hole": false, 00:08:12.609 "seek_data": false, 00:08:12.609 "copy": true, 00:08:12.609 "nvme_iov_md": false 00:08:12.609 }, 00:08:12.609 "memory_domains": [ 00:08:12.609 { 00:08:12.609 "dma_device_id": "system", 00:08:12.609 "dma_device_type": 1 00:08:12.609 }, 00:08:12.609 { 00:08:12.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.609 "dma_device_type": 2 00:08:12.609 } 00:08:12.609 ], 00:08:12.609 "driver_specific": {} 00:08:12.609 } 00:08:12.609 ] 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.609 15:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.609 15:17:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.609 "name": "Existed_Raid", 00:08:12.609 "uuid": "372b3649-9449-4d55-b025-7395bef38922", 00:08:12.609 "strip_size_kb": 64, 00:08:12.609 "state": "online", 00:08:12.609 "raid_level": "raid0", 00:08:12.609 "superblock": true, 00:08:12.609 "num_base_bdevs": 2, 00:08:12.609 "num_base_bdevs_discovered": 2, 00:08:12.609 "num_base_bdevs_operational": 2, 00:08:12.609 "base_bdevs_list": [ 00:08:12.609 { 00:08:12.609 "name": "BaseBdev1", 00:08:12.609 "uuid": "1b8ab275-334c-4b75-b395-09ba967a49cf", 00:08:12.609 "is_configured": true, 00:08:12.609 "data_offset": 2048, 00:08:12.609 "data_size": 63488 00:08:12.609 }, 00:08:12.609 { 00:08:12.609 "name": "BaseBdev2", 00:08:12.609 "uuid": "6f73290d-18ef-40ed-a585-76063187dea6", 00:08:12.609 "is_configured": true, 00:08:12.609 "data_offset": 2048, 00:08:12.609 "data_size": 63488 00:08:12.609 } 00:08:12.609 ] 00:08:12.609 }' 00:08:12.609 15:17:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.610 15:17:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.179 
15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.179 [2024-11-10 15:17:19.245495] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.179 "name": "Existed_Raid", 00:08:13.179 "aliases": [ 00:08:13.179 "372b3649-9449-4d55-b025-7395bef38922" 00:08:13.179 ], 00:08:13.179 "product_name": "Raid Volume", 00:08:13.179 "block_size": 512, 00:08:13.179 "num_blocks": 126976, 00:08:13.179 "uuid": "372b3649-9449-4d55-b025-7395bef38922", 00:08:13.179 "assigned_rate_limits": { 00:08:13.179 "rw_ios_per_sec": 0, 00:08:13.179 "rw_mbytes_per_sec": 0, 00:08:13.179 "r_mbytes_per_sec": 0, 00:08:13.179 "w_mbytes_per_sec": 0 00:08:13.179 }, 00:08:13.179 "claimed": false, 00:08:13.179 "zoned": false, 00:08:13.179 "supported_io_types": { 00:08:13.179 "read": true, 00:08:13.179 "write": true, 00:08:13.179 "unmap": true, 00:08:13.179 "flush": true, 00:08:13.179 "reset": true, 00:08:13.179 "nvme_admin": false, 00:08:13.179 "nvme_io": false, 00:08:13.179 "nvme_io_md": false, 00:08:13.179 "write_zeroes": true, 00:08:13.179 "zcopy": false, 00:08:13.179 "get_zone_info": false, 00:08:13.179 "zone_management": false, 00:08:13.179 "zone_append": false, 00:08:13.179 "compare": false, 00:08:13.179 "compare_and_write": false, 00:08:13.179 "abort": false, 00:08:13.179 "seek_hole": false, 00:08:13.179 "seek_data": false, 00:08:13.179 "copy": false, 00:08:13.179 
"nvme_iov_md": false 00:08:13.179 }, 00:08:13.179 "memory_domains": [ 00:08:13.179 { 00:08:13.179 "dma_device_id": "system", 00:08:13.179 "dma_device_type": 1 00:08:13.179 }, 00:08:13.179 { 00:08:13.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.179 "dma_device_type": 2 00:08:13.179 }, 00:08:13.179 { 00:08:13.179 "dma_device_id": "system", 00:08:13.179 "dma_device_type": 1 00:08:13.179 }, 00:08:13.179 { 00:08:13.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.179 "dma_device_type": 2 00:08:13.179 } 00:08:13.179 ], 00:08:13.179 "driver_specific": { 00:08:13.179 "raid": { 00:08:13.179 "uuid": "372b3649-9449-4d55-b025-7395bef38922", 00:08:13.179 "strip_size_kb": 64, 00:08:13.179 "state": "online", 00:08:13.179 "raid_level": "raid0", 00:08:13.179 "superblock": true, 00:08:13.179 "num_base_bdevs": 2, 00:08:13.179 "num_base_bdevs_discovered": 2, 00:08:13.179 "num_base_bdevs_operational": 2, 00:08:13.179 "base_bdevs_list": [ 00:08:13.179 { 00:08:13.179 "name": "BaseBdev1", 00:08:13.179 "uuid": "1b8ab275-334c-4b75-b395-09ba967a49cf", 00:08:13.179 "is_configured": true, 00:08:13.179 "data_offset": 2048, 00:08:13.179 "data_size": 63488 00:08:13.179 }, 00:08:13.179 { 00:08:13.179 "name": "BaseBdev2", 00:08:13.179 "uuid": "6f73290d-18ef-40ed-a585-76063187dea6", 00:08:13.179 "is_configured": true, 00:08:13.179 "data_offset": 2048, 00:08:13.179 "data_size": 63488 00:08:13.179 } 00:08:13.179 ] 00:08:13.179 } 00:08:13.179 } 00:08:13.179 }' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.179 BaseBdev2' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.179 15:17:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.179 [2024-11-10 15:17:19.437349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.179 [2024-11-10 15:17:19.437377] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.179 [2024-11-10 15:17:19.437430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.179 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.180 15:17:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.180 "name": "Existed_Raid", 00:08:13.180 "uuid": "372b3649-9449-4d55-b025-7395bef38922", 00:08:13.180 "strip_size_kb": 64, 00:08:13.180 "state": "offline", 00:08:13.180 "raid_level": "raid0", 00:08:13.180 "superblock": true, 00:08:13.180 "num_base_bdevs": 2, 00:08:13.180 "num_base_bdevs_discovered": 1, 00:08:13.180 "num_base_bdevs_operational": 1, 00:08:13.180 "base_bdevs_list": [ 00:08:13.180 { 00:08:13.180 "name": null, 00:08:13.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.180 "is_configured": false, 00:08:13.180 "data_offset": 0, 00:08:13.180 "data_size": 63488 00:08:13.180 }, 00:08:13.180 { 00:08:13.180 "name": "BaseBdev2", 00:08:13.180 "uuid": "6f73290d-18ef-40ed-a585-76063187dea6", 00:08:13.180 "is_configured": true, 
00:08:13.180 "data_offset": 2048, 00:08:13.180 "data_size": 63488 00:08:13.180 } 00:08:13.180 ] 00:08:13.180 }' 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.180 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.782 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.782 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.782 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.782 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.782 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.783 [2024-11-10 15:17:19.917994] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.783 [2024-11-10 15:17:19.918076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:13.783 15:17:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73732 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 73732 ']' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 73732 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:13.783 15:17:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73732 00:08:13.783 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:08:13.783 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:13.783 killing process with pid 73732 00:08:13.783 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73732' 00:08:13.783 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 73732 00:08:13.783 [2024-11-10 15:17:20.034385] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.783 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 73732 00:08:13.783 [2024-11-10 15:17:20.036129] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:14.055 15:17:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:14.055 00:08:14.055 real 0m3.896s 00:08:14.055 user 0m5.957s 00:08:14.055 sys 0m0.848s 00:08:14.055 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.055 15:17:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.055 ************************************ 00:08:14.055 END TEST raid_state_function_test_sb 00:08:14.055 ************************************ 00:08:14.314 15:17:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:14.314 15:17:20 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:14.314 15:17:20 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.314 15:17:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:14.314 ************************************ 00:08:14.314 START TEST raid_superblock_test 00:08:14.314 ************************************ 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 2 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73968 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73968 00:08:14.314 
15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 73968 ']' 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.314 15:17:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.314 [2024-11-10 15:17:20.525416] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:14.314 [2024-11-10 15:17:20.525558] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73968 ] 00:08:14.314 [2024-11-10 15:17:20.657444] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.573 [2024-11-10 15:17:20.681585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.573 [2024-11-10 15:17:20.722098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.573 [2024-11-10 15:17:20.799663] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:14.573 [2024-11-10 15:17:20.799706] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.142 malloc1 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.142 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.142 [2024-11-10 15:17:21.375921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:15.142 [2024-11-10 15:17:21.375991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.142 [2024-11-10 15:17:21.376031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:15.142 [2024-11-10 15:17:21.376047] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.142 [2024-11-10 15:17:21.378409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.143 [2024-11-10 15:17:21.378442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:15.143 pt1 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.143 malloc2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.143 [2024-11-10 15:17:21.410418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:15.143 [2024-11-10 15:17:21.410465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:15.143 [2024-11-10 15:17:21.410485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:15.143 [2024-11-10 15:17:21.410494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:15.143 [2024-11-10 15:17:21.412803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:15.143 [2024-11-10 15:17:21.412836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:15.143 pt2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.143 [2024-11-10 15:17:21.422450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:15.143 [2024-11-10 15:17:21.424500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:15.143 [2024-11-10 15:17:21.424648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:15.143 [2024-11-10 15:17:21.424660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:15.143 [2024-11-10 15:17:21.424910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:15.143 [2024-11-10 15:17:21.425040] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:15.143 [2024-11-10 15:17:21.425053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:15.143 [2024-11-10 15:17:21.425174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.143 15:17:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.143 "name": "raid_bdev1", 00:08:15.143 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:15.143 "strip_size_kb": 64, 00:08:15.143 "state": "online", 00:08:15.143 "raid_level": "raid0", 00:08:15.143 "superblock": true, 00:08:15.143 "num_base_bdevs": 2, 00:08:15.143 "num_base_bdevs_discovered": 2, 00:08:15.143 "num_base_bdevs_operational": 2, 00:08:15.143 "base_bdevs_list": [ 00:08:15.143 { 00:08:15.143 "name": "pt1", 00:08:15.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.143 "is_configured": true, 00:08:15.143 "data_offset": 2048, 00:08:15.143 "data_size": 63488 00:08:15.143 }, 00:08:15.143 { 00:08:15.143 "name": "pt2", 00:08:15.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.143 
"is_configured": true, 00:08:15.143 "data_offset": 2048, 00:08:15.143 "data_size": 63488 00:08:15.143 } 00:08:15.143 ] 00:08:15.143 }' 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.143 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.712 [2024-11-10 15:17:21.902987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.712 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.712 "name": "raid_bdev1", 00:08:15.712 "aliases": [ 00:08:15.712 "0f7f46bb-f1f8-4340-967b-e3fa6f89e401" 00:08:15.712 ], 00:08:15.712 "product_name": "Raid Volume", 00:08:15.712 "block_size": 512, 00:08:15.712 "num_blocks": 126976, 00:08:15.712 "uuid": 
"0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:15.712 "assigned_rate_limits": { 00:08:15.712 "rw_ios_per_sec": 0, 00:08:15.712 "rw_mbytes_per_sec": 0, 00:08:15.712 "r_mbytes_per_sec": 0, 00:08:15.712 "w_mbytes_per_sec": 0 00:08:15.712 }, 00:08:15.712 "claimed": false, 00:08:15.712 "zoned": false, 00:08:15.712 "supported_io_types": { 00:08:15.712 "read": true, 00:08:15.712 "write": true, 00:08:15.712 "unmap": true, 00:08:15.712 "flush": true, 00:08:15.712 "reset": true, 00:08:15.712 "nvme_admin": false, 00:08:15.712 "nvme_io": false, 00:08:15.712 "nvme_io_md": false, 00:08:15.712 "write_zeroes": true, 00:08:15.712 "zcopy": false, 00:08:15.712 "get_zone_info": false, 00:08:15.712 "zone_management": false, 00:08:15.712 "zone_append": false, 00:08:15.712 "compare": false, 00:08:15.712 "compare_and_write": false, 00:08:15.712 "abort": false, 00:08:15.712 "seek_hole": false, 00:08:15.712 "seek_data": false, 00:08:15.712 "copy": false, 00:08:15.712 "nvme_iov_md": false 00:08:15.712 }, 00:08:15.712 "memory_domains": [ 00:08:15.712 { 00:08:15.712 "dma_device_id": "system", 00:08:15.712 "dma_device_type": 1 00:08:15.712 }, 00:08:15.712 { 00:08:15.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.712 "dma_device_type": 2 00:08:15.712 }, 00:08:15.712 { 00:08:15.712 "dma_device_id": "system", 00:08:15.712 "dma_device_type": 1 00:08:15.712 }, 00:08:15.712 { 00:08:15.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.712 "dma_device_type": 2 00:08:15.712 } 00:08:15.712 ], 00:08:15.712 "driver_specific": { 00:08:15.712 "raid": { 00:08:15.712 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:15.712 "strip_size_kb": 64, 00:08:15.712 "state": "online", 00:08:15.712 "raid_level": "raid0", 00:08:15.712 "superblock": true, 00:08:15.712 "num_base_bdevs": 2, 00:08:15.712 "num_base_bdevs_discovered": 2, 00:08:15.712 "num_base_bdevs_operational": 2, 00:08:15.712 "base_bdevs_list": [ 00:08:15.712 { 00:08:15.712 "name": "pt1", 00:08:15.713 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:15.713 "is_configured": true, 00:08:15.713 "data_offset": 2048, 00:08:15.713 "data_size": 63488 00:08:15.713 }, 00:08:15.713 { 00:08:15.713 "name": "pt2", 00:08:15.713 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.713 "is_configured": true, 00:08:15.713 "data_offset": 2048, 00:08:15.713 "data_size": 63488 00:08:15.713 } 00:08:15.713 ] 00:08:15.713 } 00:08:15.713 } 00:08:15.713 }' 00:08:15.713 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.713 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.713 pt2' 00:08:15.713 15:17:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.713 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:15.973 [2024-11-10 15:17:22.118999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0f7f46bb-f1f8-4340-967b-e3fa6f89e401 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0f7f46bb-f1f8-4340-967b-e3fa6f89e401 ']' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 [2024-11-10 15:17:22.146672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.973 [2024-11-10 15:17:22.146709] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.973 [2024-11-10 15:17:22.146828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.973 [2024-11-10 15:17:22.146891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.973 [2024-11-10 15:17:22.146912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
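The `rpc_cmd` calls traced above (`bdev_raid_delete`, `bdev_raid_get_bdevs`, `bdev_passthru_delete`) are SPDK JSON-RPC methods. A minimal Python sketch of the request shape such a call puts on the wire — transport omitted, standard JSON-RPC 2.0 framing assumed; the method and parameter names are taken from the trace itself:

```python
import json

def make_rpc_request(method, params=None, req_id=1):
    """Build the JSON-RPC 2.0 request body that an SPDK RPC client sends.

    Only the framing is shown here; the real client (scripts/rpc.py)
    also handles the UNIX-socket transport and response matching.
    """
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params is not None:
        req["params"] = params
    return json.dumps(req)

# e.g. the delete call from the trace:
# make_rpc_request("bdev_raid_delete", {"name": "raid_bdev1"})
```

On failure the server answers with an error object instead of a result — the `code: -17, "File exists"` response seen later in this trace has exactly that shape.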
00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 [2024-11-10 15:17:22.282814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:15.973 [2024-11-10 15:17:22.285055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:15.973 [2024-11-10 15:17:22.285143] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:15.973 [2024-11-10 15:17:22.285204] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:15.973 [2024-11-10 15:17:22.285221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.973 [2024-11-10 15:17:22.285241] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:15.973 request: 00:08:15.973 { 00:08:15.973 "name": "raid_bdev1", 00:08:15.973 "raid_level": "raid0", 00:08:15.973 "base_bdevs": [ 00:08:15.973 "malloc1", 00:08:15.973 "malloc2" 00:08:15.973 ], 00:08:15.973 "strip_size_kb": 64, 00:08:15.973 "superblock": false, 00:08:15.973 "method": "bdev_raid_create", 00:08:15.973 "req_id": 1 00:08:15.973 } 00:08:15.973 Got JSON-RPC error response 00:08:15.973 response: 00:08:15.973 { 00:08:15.973 "code": -17, 00:08:15.973 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:08:15.973 } 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.973 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.233 [2024-11-10 15:17:22.346701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:16.233 [2024-11-10 15:17:22.346763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.233 [2024-11-10 15:17:22.346783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:16.233 
[2024-11-10 15:17:22.346799] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.233 [2024-11-10 15:17:22.349318] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.233 [2024-11-10 15:17:22.349356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:16.233 [2024-11-10 15:17:22.349454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:16.233 [2024-11-10 15:17:22.349503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:16.233 pt1 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.233 "name": "raid_bdev1", 00:08:16.233 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:16.233 "strip_size_kb": 64, 00:08:16.233 "state": "configuring", 00:08:16.233 "raid_level": "raid0", 00:08:16.233 "superblock": true, 00:08:16.233 "num_base_bdevs": 2, 00:08:16.233 "num_base_bdevs_discovered": 1, 00:08:16.233 "num_base_bdevs_operational": 2, 00:08:16.233 "base_bdevs_list": [ 00:08:16.233 { 00:08:16.233 "name": "pt1", 00:08:16.233 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.233 "is_configured": true, 00:08:16.233 "data_offset": 2048, 00:08:16.233 "data_size": 63488 00:08:16.233 }, 00:08:16.233 { 00:08:16.233 "name": null, 00:08:16.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.233 "is_configured": false, 00:08:16.233 "data_offset": 2048, 00:08:16.233 "data_size": 63488 00:08:16.233 } 00:08:16.233 ] 00:08:16.233 }' 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.233 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.493 [2024-11-10 15:17:22.834892] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:16.493 [2024-11-10 15:17:22.834987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.493 [2024-11-10 15:17:22.835027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:16.493 [2024-11-10 15:17:22.835041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.493 [2024-11-10 15:17:22.835555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.493 [2024-11-10 15:17:22.835668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:16.493 [2024-11-10 15:17:22.835769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:16.493 [2024-11-10 15:17:22.835801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:16.493 [2024-11-10 15:17:22.835900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:16.493 [2024-11-10 15:17:22.835912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:16.493 [2024-11-10 15:17:22.836181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:16.493 [2024-11-10 15:17:22.836309] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:16.493 [2024-11-10 15:17:22.836318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:16.493 [2024-11-10 15:17:22.836432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:16.493 
pt2 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.493 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.752 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.752 15:17:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.752 "name": "raid_bdev1", 00:08:16.752 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:16.752 "strip_size_kb": 64, 00:08:16.752 "state": "online", 00:08:16.752 "raid_level": "raid0", 00:08:16.752 "superblock": true, 00:08:16.752 "num_base_bdevs": 2, 00:08:16.752 "num_base_bdevs_discovered": 2, 00:08:16.752 "num_base_bdevs_operational": 2, 00:08:16.752 "base_bdevs_list": [ 00:08:16.752 { 00:08:16.752 "name": "pt1", 00:08:16.752 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:16.752 "is_configured": true, 00:08:16.752 "data_offset": 2048, 00:08:16.752 "data_size": 63488 00:08:16.752 }, 00:08:16.752 { 00:08:16.752 "name": "pt2", 00:08:16.752 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.752 "is_configured": true, 00:08:16.752 "data_offset": 2048, 00:08:16.752 "data_size": 63488 00:08:16.752 } 00:08:16.752 ] 00:08:16.752 }' 00:08:16.752 15:17:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.752 15:17:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.012 [2024-11-10 15:17:23.263364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.012 "name": "raid_bdev1", 00:08:17.012 "aliases": [ 00:08:17.012 "0f7f46bb-f1f8-4340-967b-e3fa6f89e401" 00:08:17.012 ], 00:08:17.012 "product_name": "Raid Volume", 00:08:17.012 "block_size": 512, 00:08:17.012 "num_blocks": 126976, 00:08:17.012 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:17.012 "assigned_rate_limits": { 00:08:17.012 "rw_ios_per_sec": 0, 00:08:17.012 "rw_mbytes_per_sec": 0, 00:08:17.012 "r_mbytes_per_sec": 0, 00:08:17.012 "w_mbytes_per_sec": 0 00:08:17.012 }, 00:08:17.012 "claimed": false, 00:08:17.012 "zoned": false, 00:08:17.012 "supported_io_types": { 00:08:17.012 "read": true, 00:08:17.012 "write": true, 00:08:17.012 "unmap": true, 00:08:17.012 "flush": true, 00:08:17.012 "reset": true, 00:08:17.012 "nvme_admin": false, 00:08:17.012 "nvme_io": false, 00:08:17.012 "nvme_io_md": false, 00:08:17.012 "write_zeroes": true, 00:08:17.012 "zcopy": false, 00:08:17.012 "get_zone_info": false, 00:08:17.012 "zone_management": false, 00:08:17.012 "zone_append": false, 00:08:17.012 "compare": false, 00:08:17.012 "compare_and_write": false, 00:08:17.012 "abort": false, 00:08:17.012 "seek_hole": false, 00:08:17.012 "seek_data": false, 00:08:17.012 "copy": false, 00:08:17.012 "nvme_iov_md": false 00:08:17.012 }, 00:08:17.012 "memory_domains": [ 00:08:17.012 { 00:08:17.012 "dma_device_id": "system", 00:08:17.012 "dma_device_type": 1 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:17.012 "dma_device_type": 2 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "dma_device_id": "system", 00:08:17.012 "dma_device_type": 1 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.012 "dma_device_type": 2 00:08:17.012 } 00:08:17.012 ], 00:08:17.012 "driver_specific": { 00:08:17.012 "raid": { 00:08:17.012 "uuid": "0f7f46bb-f1f8-4340-967b-e3fa6f89e401", 00:08:17.012 "strip_size_kb": 64, 00:08:17.012 "state": "online", 00:08:17.012 "raid_level": "raid0", 00:08:17.012 "superblock": true, 00:08:17.012 "num_base_bdevs": 2, 00:08:17.012 "num_base_bdevs_discovered": 2, 00:08:17.012 "num_base_bdevs_operational": 2, 00:08:17.012 "base_bdevs_list": [ 00:08:17.012 { 00:08:17.012 "name": "pt1", 00:08:17.012 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:17.012 "is_configured": true, 00:08:17.012 "data_offset": 2048, 00:08:17.012 "data_size": 63488 00:08:17.012 }, 00:08:17.012 { 00:08:17.012 "name": "pt2", 00:08:17.012 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:17.012 "is_configured": true, 00:08:17.012 "data_offset": 2048, 00:08:17.012 "data_size": 63488 00:08:17.012 } 00:08:17.012 ] 00:08:17.012 } 00:08:17.012 } 00:08:17.012 }' 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:17.012 pt2' 00:08:17.012 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:17.272 
15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
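The odd-looking comparisons above — `cmp_base_bdev='512   '` matched against the glob-escaped pattern `\5\1\2\ \ \ ` — come from jq's `join(" ")`, which renders null fields as empty strings; on a plain (non-DIF) bdev, `md_size`, `md_interleave`, and `dif_type` are null, so the joined string is `512` followed by three spaces. An illustrative Python equivalent of that jq filter (not SPDK code):

```python
def join_fields(bdev, keys=("block_size", "md_size", "md_interleave", "dif_type")):
    """Mimic jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.

    jq treats null as the empty string when joining, so absent metadata
    fields leave trailing separator spaces in the result.
    """
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)
```

The test passes when the raid bdev and every base bdev produce identical joined strings, i.e. they agree on block size and metadata layout.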
00:08:17.272 [2024-11-10 15:17:23.491434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0f7f46bb-f1f8-4340-967b-e3fa6f89e401 '!=' 0f7f46bb-f1f8-4340-967b-e3fa6f89e401 ']' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73968 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 73968 ']' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 73968 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73968 00:08:17.272 killing process with pid 73968 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73968' 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 73968 00:08:17.272 [2024-11-10 15:17:23.569053] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:17.272 [2024-11-10 15:17:23.569190] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:17.272 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 73968 00:08:17.272 [2024-11-10 15:17:23.569248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:17.272 [2024-11-10 15:17:23.569263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:17.272 [2024-11-10 15:17:23.613145] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.841 15:17:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:17.841 00:08:17.841 real 0m3.513s 00:08:17.841 user 0m5.298s 00:08:17.841 sys 0m0.760s 00:08:17.841 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:17.841 15:17:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.841 ************************************ 00:08:17.841 END TEST raid_superblock_test 00:08:17.841 ************************************ 00:08:17.841 15:17:24 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:17.841 15:17:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:17.841 15:17:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:17.841 15:17:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.841 ************************************ 00:08:17.841 START TEST raid_read_error_test 00:08:17.841 ************************************ 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 read 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.841 15:17:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5xRAOi8shl 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74174 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74174 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 74174 ']' 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:17.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:17.841 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.841 [2024-11-10 15:17:24.129398] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:17.841 [2024-11-10 15:17:24.129574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74174 ] 00:08:18.100 [2024-11-10 15:17:24.269740] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:18.100 [2024-11-10 15:17:24.307834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.101 [2024-11-10 15:17:24.350806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.101 [2024-11-10 15:17:24.428412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.101 [2024-11-10 15:17:24.428461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.669 BaseBdev1_malloc 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.669 true 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:18.669 15:17:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.669 [2024-11-10 15:17:25.002457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.669 [2024-11-10 15:17:25.002525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.669 [2024-11-10 15:17:25.002546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.669 [2024-11-10 15:17:25.002569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.669 [2024-11-10 15:17:25.005198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.669 [2024-11-10 15:17:25.005239] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.669 BaseBdev1 00:08:18.669 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.669 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.669 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.669 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.669 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 BaseBdev2_malloc 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 true 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 [2024-11-10 15:17:25.049721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.928 [2024-11-10 15:17:25.049786] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.928 [2024-11-10 15:17:25.049805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.928 [2024-11-10 15:17:25.049816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.928 [2024-11-10 15:17:25.052287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.928 [2024-11-10 15:17:25.052323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.928 BaseBdev2 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 [2024-11-10 15:17:25.061735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.928 [2024-11-10 15:17:25.063971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.928 [2024-11-10 15:17:25.064188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:08:18.928 [2024-11-10 15:17:25.064215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.928 [2024-11-10 15:17:25.064488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:18.928 [2024-11-10 15:17:25.064669] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:18.928 [2024-11-10 15:17:25.064685] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:18.928 [2024-11-10 15:17:25.064842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.928 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.928 "name": "raid_bdev1", 00:08:18.928 "uuid": "d1211ea7-23ee-4ebe-9498-3db2412e586a", 00:08:18.929 "strip_size_kb": 64, 00:08:18.929 "state": "online", 00:08:18.929 "raid_level": "raid0", 00:08:18.929 "superblock": true, 00:08:18.929 "num_base_bdevs": 2, 00:08:18.929 "num_base_bdevs_discovered": 2, 00:08:18.929 "num_base_bdevs_operational": 2, 00:08:18.929 "base_bdevs_list": [ 00:08:18.929 { 00:08:18.929 "name": "BaseBdev1", 00:08:18.929 "uuid": "4e44ddf4-d140-5a98-bc83-f75898439bc4", 00:08:18.929 "is_configured": true, 00:08:18.929 "data_offset": 2048, 00:08:18.929 "data_size": 63488 00:08:18.929 }, 00:08:18.929 { 00:08:18.929 "name": "BaseBdev2", 00:08:18.929 "uuid": "869046a5-89d6-5173-ab45-f52faf8fccfa", 00:08:18.929 "is_configured": true, 00:08:18.929 "data_offset": 2048, 00:08:18.929 "data_size": 63488 00:08:18.929 } 00:08:18.929 ] 00:08:18.929 }' 00:08:18.929 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.929 15:17:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.188 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:19.188 15:17:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:19.447 [2024-11-10 15:17:25.594488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:20.386 
15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.386 "name": "raid_bdev1", 00:08:20.386 "uuid": "d1211ea7-23ee-4ebe-9498-3db2412e586a", 00:08:20.386 "strip_size_kb": 64, 00:08:20.386 "state": "online", 00:08:20.386 "raid_level": "raid0", 00:08:20.386 "superblock": true, 00:08:20.386 "num_base_bdevs": 2, 00:08:20.386 "num_base_bdevs_discovered": 2, 00:08:20.386 "num_base_bdevs_operational": 2, 00:08:20.386 "base_bdevs_list": [ 00:08:20.386 { 00:08:20.386 "name": "BaseBdev1", 00:08:20.386 "uuid": "4e44ddf4-d140-5a98-bc83-f75898439bc4", 00:08:20.386 "is_configured": true, 00:08:20.386 "data_offset": 2048, 00:08:20.386 "data_size": 63488 00:08:20.386 }, 00:08:20.386 { 00:08:20.386 "name": "BaseBdev2", 00:08:20.386 "uuid": "869046a5-89d6-5173-ab45-f52faf8fccfa", 00:08:20.386 "is_configured": true, 00:08:20.386 "data_offset": 2048, 00:08:20.386 "data_size": 63488 00:08:20.386 } 00:08:20.386 ] 00:08:20.386 }' 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.386 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.653 [2024-11-10 15:17:26.965740] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:20.653 [2024-11-10 15:17:26.965787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:20.653 [2024-11-10 15:17:26.968350] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.653 [2024-11-10 15:17:26.968413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.653 [2024-11-10 15:17:26.968450] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.653 [2024-11-10 15:17:26.968471] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:20.653 { 00:08:20.653 "results": [ 00:08:20.653 { 00:08:20.653 "job": "raid_bdev1", 00:08:20.653 "core_mask": "0x1", 00:08:20.653 "workload": "randrw", 00:08:20.653 "percentage": 50, 00:08:20.653 "status": "finished", 00:08:20.653 "queue_depth": 1, 00:08:20.653 "io_size": 131072, 00:08:20.653 "runtime": 1.36898, 00:08:20.653 "iops": 14482.315300442666, 00:08:20.653 "mibps": 1810.2894125553332, 00:08:20.653 "io_failed": 1, 00:08:20.653 "io_timeout": 0, 00:08:20.653 "avg_latency_us": 96.87858478409497, 00:08:20.653 "min_latency_us": 25.43711322234812, 00:08:20.653 "max_latency_us": 1456.6094308376187 00:08:20.653 } 00:08:20.653 ], 00:08:20.653 "core_count": 1 00:08:20.653 } 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74174 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 74174 ']' 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 74174 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:20.653 15:17:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74174 00:08:20.653 killing process with pid 74174 00:08:20.653 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:20.653 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:20.653 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74174' 00:08:20.653 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 74174 00:08:20.653 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 74174 00:08:20.653 [2024-11-10 15:17:27.007901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.912 [2024-11-10 15:17:27.036577] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5xRAOi8shl 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:21.172 00:08:21.172 real 0m3.347s 00:08:21.172 user 0m4.132s 00:08:21.172 sys 0m0.610s 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:21.172 15:17:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.172 ************************************ 00:08:21.172 END TEST raid_read_error_test 00:08:21.172 ************************************ 00:08:21.172 15:17:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:21.172 15:17:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:21.172 15:17:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:21.172 15:17:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.172 ************************************ 00:08:21.172 START TEST raid_write_error_test 00:08:21.172 ************************************ 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 2 write 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l16NAUML1C 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74303 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74303 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 74303 ']' 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.172 15:17:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:21.172 15:17:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.432 [2024-11-10 15:17:27.542169] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:21.432 [2024-11-10 15:17:27.542365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74303 ] 00:08:21.432 [2024-11-10 15:17:27.680307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:21.432 [2024-11-10 15:17:27.719093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.432 [2024-11-10 15:17:27.761166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.691 [2024-11-10 15:17:27.838295] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.691 [2024-11-10 15:17:27.838338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.259 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 BaseBdev1_malloc 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 true 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 [2024-11-10 15:17:28.445800] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:22.260 [2024-11-10 15:17:28.445877] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.260 [2024-11-10 15:17:28.445898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:22.260 [2024-11-10 15:17:28.445915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.260 [2024-11-10 15:17:28.448494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.260 [2024-11-10 15:17:28.448557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:22.260 BaseBdev1 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 BaseBdev2_malloc 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 true 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 [2024-11-10 15:17:28.492677] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:22.260 [2024-11-10 15:17:28.492740] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.260 [2024-11-10 15:17:28.492760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:22.260 [2024-11-10 15:17:28.492772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.260 [2024-11-10 15:17:28.495285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.260 [2024-11-10 15:17:28.495412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:22.260 BaseBdev2 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 [2024-11-10 15:17:28.504729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.260 [2024-11-10 15:17:28.507043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.260 [2024-11-10 15:17:28.507245] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:22.260 
[2024-11-10 15:17:28.507263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.260 [2024-11-10 15:17:28.507586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:22.260 [2024-11-10 15:17:28.507766] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:22.260 [2024-11-10 15:17:28.507777] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:22.260 [2024-11-10 15:17:28.507933] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.260 15:17:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.260 "name": "raid_bdev1", 00:08:22.260 "uuid": "6dbf8413-71c5-442c-987a-ae81cc93d7ee", 00:08:22.260 "strip_size_kb": 64, 00:08:22.260 "state": "online", 00:08:22.260 "raid_level": "raid0", 00:08:22.260 "superblock": true, 00:08:22.260 "num_base_bdevs": 2, 00:08:22.260 "num_base_bdevs_discovered": 2, 00:08:22.260 "num_base_bdevs_operational": 2, 00:08:22.260 "base_bdevs_list": [ 00:08:22.260 { 00:08:22.260 "name": "BaseBdev1", 00:08:22.260 "uuid": "c2c99d55-5e15-5f72-a195-cf7867ac2417", 00:08:22.260 "is_configured": true, 00:08:22.260 "data_offset": 2048, 00:08:22.260 "data_size": 63488 00:08:22.260 }, 00:08:22.260 { 00:08:22.260 "name": "BaseBdev2", 00:08:22.260 "uuid": "509c3fca-d860-59d0-8e81-6595a256856b", 00:08:22.260 "is_configured": true, 00:08:22.260 "data_offset": 2048, 00:08:22.260 "data_size": 63488 00:08:22.260 } 00:08:22.260 ] 00:08:22.260 }' 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.260 15:17:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.829 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:22.829 15:17:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:22.829 [2024-11-10 15:17:28.997425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:23.777 15:17:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.777 "name": "raid_bdev1", 00:08:23.777 "uuid": "6dbf8413-71c5-442c-987a-ae81cc93d7ee", 00:08:23.777 "strip_size_kb": 64, 00:08:23.777 "state": "online", 00:08:23.777 "raid_level": "raid0", 00:08:23.777 "superblock": true, 00:08:23.777 "num_base_bdevs": 2, 00:08:23.777 "num_base_bdevs_discovered": 2, 00:08:23.777 "num_base_bdevs_operational": 2, 00:08:23.777 "base_bdevs_list": [ 00:08:23.777 { 00:08:23.777 "name": "BaseBdev1", 00:08:23.777 "uuid": "c2c99d55-5e15-5f72-a195-cf7867ac2417", 00:08:23.777 "is_configured": true, 00:08:23.777 "data_offset": 2048, 00:08:23.777 "data_size": 63488 00:08:23.777 }, 00:08:23.777 { 00:08:23.777 "name": "BaseBdev2", 00:08:23.777 "uuid": "509c3fca-d860-59d0-8e81-6595a256856b", 00:08:23.777 "is_configured": true, 00:08:23.777 "data_offset": 2048, 00:08:23.777 "data_size": 63488 00:08:23.777 } 00:08:23.777 ] 00:08:23.777 }' 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.777 15:17:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.037 [2024-11-10 15:17:30.324836] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.037 [2024-11-10 15:17:30.324966] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.037 [2024-11-10 15:17:30.327574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.037 [2024-11-10 15:17:30.327676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.037 [2024-11-10 15:17:30.327734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.037 [2024-11-10 15:17:30.327805] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:24.037 { 00:08:24.037 "results": [ 00:08:24.037 { 00:08:24.037 "job": "raid_bdev1", 00:08:24.037 "core_mask": "0x1", 00:08:24.037 "workload": "randrw", 00:08:24.037 "percentage": 50, 00:08:24.037 "status": "finished", 00:08:24.037 "queue_depth": 1, 00:08:24.037 "io_size": 131072, 00:08:24.037 "runtime": 1.324903, 00:08:24.037 "iops": 14515.7796457552, 00:08:24.037 "mibps": 1814.4724557194, 00:08:24.037 "io_failed": 1, 00:08:24.037 "io_timeout": 0, 00:08:24.037 "avg_latency_us": 96.83910124457464, 00:08:24.037 "min_latency_us": 25.325546936285193, 00:08:24.037 "max_latency_us": 1378.0667654493159 00:08:24.037 } 00:08:24.037 ], 00:08:24.037 "core_count": 1 00:08:24.037 } 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74303 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 74303 ']' 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 74303 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:24.037 15:17:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74303 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74303' 00:08:24.037 killing process with pid 74303 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 74303 00:08:24.037 [2024-11-10 15:17:30.376069] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.037 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 74303 00:08:24.297 [2024-11-10 15:17:30.405793] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l16NAUML1C 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:24.557 ************************************ 00:08:24.557 END TEST raid_write_error_test 00:08:24.557 ************************************ 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 
]] 00:08:24.557 00:08:24.557 real 0m3.304s 00:08:24.557 user 0m4.066s 00:08:24.557 sys 0m0.564s 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.557 15:17:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.557 15:17:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:24.557 15:17:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:24.557 15:17:30 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:24.557 15:17:30 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.557 15:17:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.557 ************************************ 00:08:24.557 START TEST raid_state_function_test 00:08:24.557 ************************************ 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 false 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:24.557 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74430 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74430' 00:08:24.558 Process raid pid: 74430 
00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74430 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 74430 ']' 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.558 15:17:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.558 [2024-11-10 15:17:30.894468] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:24.558 [2024-11-10 15:17:30.894599] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.817 [2024-11-10 15:17:31.028860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:24.817 [2024-11-10 15:17:31.048994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.817 [2024-11-10 15:17:31.090248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.817 [2024-11-10 15:17:31.167031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:24.817 [2024-11-10 15:17:31.167071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.390 [2024-11-10 15:17:31.732294] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.390 [2024-11-10 15:17:31.732358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.390 [2024-11-10 15:17:31.732374] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.390 [2024-11-10 15:17:31.732384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.390 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.650 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.650 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.650 "name": "Existed_Raid", 00:08:25.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.650 "strip_size_kb": 64, 00:08:25.650 "state": "configuring", 00:08:25.650 "raid_level": "concat", 00:08:25.650 "superblock": false, 00:08:25.650 "num_base_bdevs": 2, 00:08:25.650 "num_base_bdevs_discovered": 0, 00:08:25.650 "num_base_bdevs_operational": 2, 00:08:25.650 "base_bdevs_list": [ 00:08:25.650 { 00:08:25.650 "name": "BaseBdev1", 00:08:25.650 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:25.650 "is_configured": false, 00:08:25.650 "data_offset": 0, 00:08:25.650 "data_size": 0 00:08:25.650 }, 00:08:25.650 { 00:08:25.650 "name": "BaseBdev2", 00:08:25.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.650 "is_configured": false, 00:08:25.650 "data_offset": 0, 00:08:25.650 "data_size": 0 00:08:25.650 } 00:08:25.650 ] 00:08:25.650 }' 00:08:25.650 15:17:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.650 15:17:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [2024-11-10 15:17:32.164338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.910 [2024-11-10 15:17:32.164433] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [2024-11-10 15:17:32.172341] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:25.910 [2024-11-10 15:17:32.172462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:25.910 
[2024-11-10 15:17:32.172502] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.910 [2024-11-10 15:17:32.172528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [2024-11-10 15:17:32.189519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.910 BaseBdev1 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 15:17:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 [ 00:08:25.910 { 00:08:25.910 "name": "BaseBdev1", 00:08:25.910 "aliases": [ 00:08:25.910 "7f78bc81-114a-46b4-91d7-066e1227efdc" 00:08:25.910 ], 00:08:25.910 "product_name": "Malloc disk", 00:08:25.910 "block_size": 512, 00:08:25.910 "num_blocks": 65536, 00:08:25.910 "uuid": "7f78bc81-114a-46b4-91d7-066e1227efdc", 00:08:25.910 "assigned_rate_limits": { 00:08:25.910 "rw_ios_per_sec": 0, 00:08:25.910 "rw_mbytes_per_sec": 0, 00:08:25.910 "r_mbytes_per_sec": 0, 00:08:25.910 "w_mbytes_per_sec": 0 00:08:25.910 }, 00:08:25.910 "claimed": true, 00:08:25.910 "claim_type": "exclusive_write", 00:08:25.910 "zoned": false, 00:08:25.910 "supported_io_types": { 00:08:25.910 "read": true, 00:08:25.910 "write": true, 00:08:25.910 "unmap": true, 00:08:25.910 "flush": true, 00:08:25.910 "reset": true, 00:08:25.910 "nvme_admin": false, 00:08:25.910 "nvme_io": false, 00:08:25.910 "nvme_io_md": false, 00:08:25.910 "write_zeroes": true, 00:08:25.910 "zcopy": true, 00:08:25.910 "get_zone_info": false, 00:08:25.910 "zone_management": false, 00:08:25.910 "zone_append": false, 00:08:25.910 "compare": false, 00:08:25.910 "compare_and_write": false, 00:08:25.910 "abort": true, 00:08:25.910 "seek_hole": false, 00:08:25.910 "seek_data": false, 00:08:25.910 "copy": true, 00:08:25.910 "nvme_iov_md": false 00:08:25.910 }, 00:08:25.910 "memory_domains": [ 00:08:25.910 { 00:08:25.910 "dma_device_id": "system", 00:08:25.910 "dma_device_type": 1 00:08:25.910 }, 00:08:25.910 { 00:08:25.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.910 "dma_device_type": 
2 00:08:25.910 } 00:08:25.910 ], 00:08:25.910 "driver_specific": {} 00:08:25.910 } 00:08:25.910 ] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.910 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.910 "name": "Existed_Raid", 00:08:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.910 "strip_size_kb": 64, 00:08:25.910 "state": "configuring", 00:08:25.910 "raid_level": "concat", 00:08:25.910 "superblock": false, 00:08:25.910 "num_base_bdevs": 2, 00:08:25.910 "num_base_bdevs_discovered": 1, 00:08:25.910 "num_base_bdevs_operational": 2, 00:08:25.910 "base_bdevs_list": [ 00:08:25.910 { 00:08:25.910 "name": "BaseBdev1", 00:08:25.910 "uuid": "7f78bc81-114a-46b4-91d7-066e1227efdc", 00:08:25.910 "is_configured": true, 00:08:25.910 "data_offset": 0, 00:08:25.911 "data_size": 65536 00:08:25.911 }, 00:08:25.911 { 00:08:25.911 "name": "BaseBdev2", 00:08:25.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.911 "is_configured": false, 00:08:25.911 "data_offset": 0, 00:08:25.911 "data_size": 0 00:08:25.911 } 00:08:25.911 ] 00:08:25.911 }' 00:08:25.911 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.911 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 [2024-11-10 15:17:32.621697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:26.480 [2024-11-10 15:17:32.621766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.480 15:17:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 [2024-11-10 15:17:32.633724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.480 [2024-11-10 15:17:32.635586] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:26.480 [2024-11-10 15:17:32.635643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.480 "name": "Existed_Raid", 00:08:26.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.480 "strip_size_kb": 64, 00:08:26.480 "state": "configuring", 00:08:26.480 "raid_level": "concat", 00:08:26.480 "superblock": false, 00:08:26.480 "num_base_bdevs": 2, 00:08:26.480 "num_base_bdevs_discovered": 1, 00:08:26.480 "num_base_bdevs_operational": 2, 00:08:26.480 "base_bdevs_list": [ 00:08:26.480 { 00:08:26.480 "name": "BaseBdev1", 00:08:26.480 "uuid": "7f78bc81-114a-46b4-91d7-066e1227efdc", 00:08:26.480 "is_configured": true, 00:08:26.480 "data_offset": 0, 00:08:26.480 "data_size": 65536 00:08:26.480 }, 00:08:26.480 { 00:08:26.480 "name": "BaseBdev2", 00:08:26.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.480 "is_configured": false, 00:08:26.480 "data_offset": 0, 00:08:26.480 "data_size": 0 00:08:26.480 } 00:08:26.480 ] 00:08:26.480 }' 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.480 15:17:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:26.740 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.740 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.740 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.000 [2024-11-10 15:17:33.105206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:27.000 [2024-11-10 15:17:33.105357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:27.000 [2024-11-10 15:17:33.105390] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:27.000 [2024-11-10 15:17:33.105687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:27.000 [2024-11-10 15:17:33.105897] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:27.000 [2024-11-10 15:17:33.105944] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:27.000 [2024-11-10 15:17:33.106230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.000 BaseBdev2 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:27.000 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 [ 00:08:27.001 { 00:08:27.001 "name": "BaseBdev2", 00:08:27.001 "aliases": [ 00:08:27.001 "6a40ab50-18e9-43b8-96fd-9e766e5df430" 00:08:27.001 ], 00:08:27.001 "product_name": "Malloc disk", 00:08:27.001 "block_size": 512, 00:08:27.001 "num_blocks": 65536, 00:08:27.001 "uuid": "6a40ab50-18e9-43b8-96fd-9e766e5df430", 00:08:27.001 "assigned_rate_limits": { 00:08:27.001 "rw_ios_per_sec": 0, 00:08:27.001 "rw_mbytes_per_sec": 0, 00:08:27.001 "r_mbytes_per_sec": 0, 00:08:27.001 "w_mbytes_per_sec": 0 00:08:27.001 }, 00:08:27.001 "claimed": true, 00:08:27.001 "claim_type": "exclusive_write", 00:08:27.001 "zoned": false, 00:08:27.001 "supported_io_types": { 00:08:27.001 "read": true, 00:08:27.001 "write": true, 00:08:27.001 "unmap": true, 00:08:27.001 "flush": true, 00:08:27.001 "reset": true, 00:08:27.001 "nvme_admin": false, 00:08:27.001 "nvme_io": false, 00:08:27.001 "nvme_io_md": false, 00:08:27.001 "write_zeroes": true, 00:08:27.001 "zcopy": true, 00:08:27.001 "get_zone_info": false, 00:08:27.001 "zone_management": false, 00:08:27.001 "zone_append": false, 00:08:27.001 "compare": false, 00:08:27.001 "compare_and_write": false, 
00:08:27.001 "abort": true, 00:08:27.001 "seek_hole": false, 00:08:27.001 "seek_data": false, 00:08:27.001 "copy": true, 00:08:27.001 "nvme_iov_md": false 00:08:27.001 }, 00:08:27.001 "memory_domains": [ 00:08:27.001 { 00:08:27.001 "dma_device_id": "system", 00:08:27.001 "dma_device_type": 1 00:08:27.001 }, 00:08:27.001 { 00:08:27.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.001 "dma_device_type": 2 00:08:27.001 } 00:08:27.001 ], 00:08:27.001 "driver_specific": {} 00:08:27.001 } 00:08:27.001 ] 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.001 
15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.001 "name": "Existed_Raid", 00:08:27.001 "uuid": "858f0278-4256-48ba-81d6-68d35150c704", 00:08:27.001 "strip_size_kb": 64, 00:08:27.001 "state": "online", 00:08:27.001 "raid_level": "concat", 00:08:27.001 "superblock": false, 00:08:27.001 "num_base_bdevs": 2, 00:08:27.001 "num_base_bdevs_discovered": 2, 00:08:27.001 "num_base_bdevs_operational": 2, 00:08:27.001 "base_bdevs_list": [ 00:08:27.001 { 00:08:27.001 "name": "BaseBdev1", 00:08:27.001 "uuid": "7f78bc81-114a-46b4-91d7-066e1227efdc", 00:08:27.001 "is_configured": true, 00:08:27.001 "data_offset": 0, 00:08:27.001 "data_size": 65536 00:08:27.001 }, 00:08:27.001 { 00:08:27.001 "name": "BaseBdev2", 00:08:27.001 "uuid": "6a40ab50-18e9-43b8-96fd-9e766e5df430", 00:08:27.001 "is_configured": true, 00:08:27.001 "data_offset": 0, 00:08:27.001 "data_size": 65536 00:08:27.001 } 00:08:27.001 ] 00:08:27.001 }' 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.001 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.260 15:17:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.260 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.260 [2024-11-10 15:17:33.621766] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.520 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.520 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.520 "name": "Existed_Raid", 00:08:27.520 "aliases": [ 00:08:27.520 "858f0278-4256-48ba-81d6-68d35150c704" 00:08:27.520 ], 00:08:27.520 "product_name": "Raid Volume", 00:08:27.520 "block_size": 512, 00:08:27.520 "num_blocks": 131072, 00:08:27.520 "uuid": "858f0278-4256-48ba-81d6-68d35150c704", 00:08:27.520 "assigned_rate_limits": { 00:08:27.520 "rw_ios_per_sec": 0, 00:08:27.520 "rw_mbytes_per_sec": 0, 00:08:27.520 "r_mbytes_per_sec": 0, 00:08:27.520 "w_mbytes_per_sec": 0 00:08:27.520 }, 00:08:27.520 "claimed": false, 00:08:27.520 "zoned": false, 00:08:27.520 "supported_io_types": { 00:08:27.520 "read": true, 00:08:27.520 "write": true, 00:08:27.520 "unmap": true, 00:08:27.520 
"flush": true, 00:08:27.520 "reset": true, 00:08:27.520 "nvme_admin": false, 00:08:27.520 "nvme_io": false, 00:08:27.520 "nvme_io_md": false, 00:08:27.520 "write_zeroes": true, 00:08:27.520 "zcopy": false, 00:08:27.520 "get_zone_info": false, 00:08:27.520 "zone_management": false, 00:08:27.520 "zone_append": false, 00:08:27.520 "compare": false, 00:08:27.520 "compare_and_write": false, 00:08:27.520 "abort": false, 00:08:27.520 "seek_hole": false, 00:08:27.520 "seek_data": false, 00:08:27.520 "copy": false, 00:08:27.520 "nvme_iov_md": false 00:08:27.520 }, 00:08:27.520 "memory_domains": [ 00:08:27.520 { 00:08:27.520 "dma_device_id": "system", 00:08:27.520 "dma_device_type": 1 00:08:27.520 }, 00:08:27.520 { 00:08:27.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.520 "dma_device_type": 2 00:08:27.520 }, 00:08:27.520 { 00:08:27.520 "dma_device_id": "system", 00:08:27.520 "dma_device_type": 1 00:08:27.520 }, 00:08:27.520 { 00:08:27.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.520 "dma_device_type": 2 00:08:27.520 } 00:08:27.520 ], 00:08:27.520 "driver_specific": { 00:08:27.520 "raid": { 00:08:27.520 "uuid": "858f0278-4256-48ba-81d6-68d35150c704", 00:08:27.520 "strip_size_kb": 64, 00:08:27.520 "state": "online", 00:08:27.520 "raid_level": "concat", 00:08:27.520 "superblock": false, 00:08:27.520 "num_base_bdevs": 2, 00:08:27.520 "num_base_bdevs_discovered": 2, 00:08:27.520 "num_base_bdevs_operational": 2, 00:08:27.520 "base_bdevs_list": [ 00:08:27.520 { 00:08:27.520 "name": "BaseBdev1", 00:08:27.520 "uuid": "7f78bc81-114a-46b4-91d7-066e1227efdc", 00:08:27.520 "is_configured": true, 00:08:27.520 "data_offset": 0, 00:08:27.520 "data_size": 65536 00:08:27.520 }, 00:08:27.520 { 00:08:27.520 "name": "BaseBdev2", 00:08:27.520 "uuid": "6a40ab50-18e9-43b8-96fd-9e766e5df430", 00:08:27.520 "is_configured": true, 00:08:27.520 "data_offset": 0, 00:08:27.520 "data_size": 65536 00:08:27.520 } 00:08:27.520 ] 00:08:27.520 } 00:08:27.520 } 00:08:27.521 }' 00:08:27.521 
15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:27.521 BaseBdev2' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 15:17:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 [2024-11-10 15:17:33.845580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:27.521 [2024-11-10 15:17:33.845623] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.521 [2024-11-10 15:17:33.845693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.521 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.780 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.780 "name": "Existed_Raid", 00:08:27.780 "uuid": "858f0278-4256-48ba-81d6-68d35150c704", 00:08:27.780 "strip_size_kb": 64, 00:08:27.780 "state": "offline", 00:08:27.780 "raid_level": "concat", 00:08:27.780 "superblock": false, 00:08:27.780 "num_base_bdevs": 2, 00:08:27.780 "num_base_bdevs_discovered": 1, 00:08:27.780 "num_base_bdevs_operational": 1, 00:08:27.780 
"base_bdevs_list": [ 00:08:27.780 { 00:08:27.780 "name": null, 00:08:27.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.781 "is_configured": false, 00:08:27.781 "data_offset": 0, 00:08:27.781 "data_size": 65536 00:08:27.781 }, 00:08:27.781 { 00:08:27.781 "name": "BaseBdev2", 00:08:27.781 "uuid": "6a40ab50-18e9-43b8-96fd-9e766e5df430", 00:08:27.781 "is_configured": true, 00:08:27.781 "data_offset": 0, 00:08:27.781 "data_size": 65536 00:08:27.781 } 00:08:27.781 ] 00:08:27.781 }' 00:08:27.781 15:17:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.781 15:17:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:28.040 [2024-11-10 15:17:34.357252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:28.040 [2024-11-10 15:17:34.357385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.040 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74430 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 74430 ']' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 74430 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- 
# '[' Linux = Linux ']' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74430 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:28.300 killing process with pid 74430 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74430' 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 74430 00:08:28.300 [2024-11-10 15:17:34.461876] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.300 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 74430 00:08:28.300 [2024-11-10 15:17:34.462936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.560 15:17:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.560 ************************************ 00:08:28.561 END TEST raid_state_function_test 00:08:28.561 ************************************ 00:08:28.561 00:08:28.561 real 0m3.887s 00:08:28.561 user 0m6.099s 00:08:28.561 sys 0m0.809s 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 15:17:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:28.561 15:17:34 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:28.561 15:17:34 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.561 15:17:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.561 ************************************ 00:08:28.561 START TEST 
raid_state_function_test_sb 00:08:28.561 ************************************ 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 2 true 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:28.561 
15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74672 00:08:28.561 Process raid pid: 74672 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74672' 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74672 00:08:28.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 74672 ']' 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:28.561 15:17:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:28.561 [2024-11-10 15:17:34.860990] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization...
00:08:28.561 [2024-11-10 15:17:34.861132] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:28.820 [2024-11-10 15:17:35.000745] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:28.821 [2024-11-10 15:17:35.042233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:28.821 [2024-11-10 15:17:35.068497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:28.821 [2024-11-10 15:17:35.113536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:28.821 [2024-11-10 15:17:35.113569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.390 [2024-11-10 15:17:35.693269] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:29.390 [2024-11-10 15:17:35.693397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:29.390 [2024-11-10 15:17:35.693415] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:29.390 [2024-11-10 15:17:35.693425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:29.390 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.391 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.391 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.391 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.391 "name": "Existed_Raid",
00:08:29.391 "uuid": "ddbade57-cc2f-4d47-a8ab-fcde224f8e79",
00:08:29.391 "strip_size_kb": 64,
00:08:29.391 "state": "configuring",
00:08:29.391 "raid_level": "concat",
00:08:29.391 "superblock": true,
00:08:29.391 "num_base_bdevs": 2,
00:08:29.391 "num_base_bdevs_discovered": 0,
00:08:29.391 "num_base_bdevs_operational": 2,
00:08:29.391 "base_bdevs_list": [
00:08:29.391 {
00:08:29.391 "name": "BaseBdev1",
00:08:29.391 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.391 "is_configured": false,
00:08:29.391 "data_offset": 0,
00:08:29.391 "data_size": 0
00:08:29.391 },
00:08:29.391 {
00:08:29.391 "name": "BaseBdev2",
00:08:29.391 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.391 "is_configured": false,
00:08:29.391 "data_offset": 0,
00:08:29.391 "data_size": 0
00:08:29.391 }
00:08:29.391 ]
00:08:29.391 }'
00:08:29.391 15:17:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.391 15:17:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 [2024-11-10 15:17:36.137285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:29.960 [2024-11-10 15:17:36.137373] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 [2024-11-10 15:17:36.149303] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:29.960 [2024-11-10 15:17:36.149390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:29.960 [2024-11-10 15:17:36.149428] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:29.960 [2024-11-10 15:17:36.149454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 [2024-11-10 15:17:36.170566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:29.960 BaseBdev1
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.960 [
00:08:29.960 {
00:08:29.960 "name": "BaseBdev1",
00:08:29.960 "aliases": [
00:08:29.960 "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37"
00:08:29.960 ],
00:08:29.960 "product_name": "Malloc disk",
00:08:29.960 "block_size": 512,
00:08:29.960 "num_blocks": 65536,
00:08:29.960 "uuid": "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37",
00:08:29.960 "assigned_rate_limits": {
00:08:29.960 "rw_ios_per_sec": 0,
00:08:29.960 "rw_mbytes_per_sec": 0,
00:08:29.960 "r_mbytes_per_sec": 0,
00:08:29.960 "w_mbytes_per_sec": 0
00:08:29.960 },
00:08:29.960 "claimed": true,
00:08:29.960 "claim_type": "exclusive_write",
00:08:29.960 "zoned": false,
00:08:29.960 "supported_io_types": {
00:08:29.960 "read": true,
00:08:29.960 "write": true,
00:08:29.960 "unmap": true,
00:08:29.960 "flush": true,
00:08:29.960 "reset": true,
00:08:29.960 "nvme_admin": false,
00:08:29.960 "nvme_io": false,
00:08:29.960 "nvme_io_md": false,
00:08:29.960 "write_zeroes": true,
00:08:29.960 "zcopy": true,
00:08:29.960 "get_zone_info": false,
00:08:29.960 "zone_management": false,
00:08:29.960 "zone_append": false,
00:08:29.960 "compare": false,
00:08:29.960 "compare_and_write": false,
00:08:29.960 "abort": true,
00:08:29.960 "seek_hole": false,
00:08:29.960 "seek_data": false,
00:08:29.960 "copy": true,
00:08:29.960 "nvme_iov_md": false
00:08:29.960 },
00:08:29.960 "memory_domains": [
00:08:29.960 {
00:08:29.960 "dma_device_id": "system",
00:08:29.960 "dma_device_type": 1
00:08:29.960 },
00:08:29.960 {
00:08:29.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:29.960 "dma_device_type": 2
00:08:29.960 }
00:08:29.960 ],
00:08:29.960 "driver_specific": {}
00:08:29.960 }
00:08:29.960 ]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.960 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.961 "name": "Existed_Raid",
00:08:29.961 "uuid": "4c141369-6658-4a7b-8982-8fe9bb10dd14",
00:08:29.961 "strip_size_kb": 64,
00:08:29.961 "state": "configuring",
00:08:29.961 "raid_level": "concat",
00:08:29.961 "superblock": true,
00:08:29.961 "num_base_bdevs": 2,
00:08:29.961 "num_base_bdevs_discovered": 1,
00:08:29.961 "num_base_bdevs_operational": 2,
00:08:29.961 "base_bdevs_list": [
00:08:29.961 {
00:08:29.961 "name": "BaseBdev1",
00:08:29.961 "uuid": "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37",
00:08:29.961 "is_configured": true,
00:08:29.961 "data_offset": 2048,
00:08:29.961 "data_size": 63488
00:08:29.961 },
00:08:29.961 {
00:08:29.961 "name": "BaseBdev2",
00:08:29.961 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:29.961 "is_configured": false,
00:08:29.961 "data_offset": 0,
00:08:29.961 "data_size": 0
00:08:29.961 }
00:08:29.961 ]
00:08:29.961 }'
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.961 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.221 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:30.480 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.480 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.480 [2024-11-10 15:17:36.586720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:30.480 [2024-11-10 15:17:36.586832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:30.480 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.480 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.481 [2024-11-10 15:17:36.598755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:30.481 [2024-11-10 15:17:36.600619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:30.481 [2024-11-10 15:17:36.600682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:30.481 "name": "Existed_Raid",
00:08:30.481 "uuid": "f840e7a4-0721-45f0-94ff-fe1c3f41ed00",
00:08:30.481 "strip_size_kb": 64,
00:08:30.481 "state": "configuring",
00:08:30.481 "raid_level": "concat",
00:08:30.481 "superblock": true,
00:08:30.481 "num_base_bdevs": 2,
00:08:30.481 "num_base_bdevs_discovered": 1,
00:08:30.481 "num_base_bdevs_operational": 2,
00:08:30.481 "base_bdevs_list": [
00:08:30.481 {
00:08:30.481 "name": "BaseBdev1",
00:08:30.481 "uuid": "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37",
00:08:30.481 "is_configured": true,
00:08:30.481 "data_offset": 2048,
00:08:30.481 "data_size": 63488
00:08:30.481 },
00:08:30.481 {
00:08:30.481 "name": "BaseBdev2",
00:08:30.481 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:30.481 "is_configured": false,
00:08:30.481 "data_offset": 0,
00:08:30.481 "data_size": 0
00:08:30.481 }
00:08:30.481 ]
00:08:30.481 }'
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:30.481 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.741 [2024-11-10 15:17:36.982111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:30.741 [2024-11-10 15:17:36.982416] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:08:30.741 [2024-11-10 15:17:36.982480] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:08:30.741 BaseBdev2
00:08:30.741 [2024-11-10 15:17:36.982805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150
00:08:30.741 [2024-11-10 15:17:36.982956] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:08:30.741 [2024-11-10 15:17:36.983040] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00
00:08:30.741 [2024-11-10 15:17:36.983241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:08:30.741 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.742 15:17:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.742 [
00:08:30.742 {
00:08:30.742 "name": "BaseBdev2",
00:08:30.742 "aliases": [
00:08:30.742 "60f40ecc-c78a-41a3-89df-27517e0af65b"
00:08:30.742 ],
00:08:30.742 "product_name": "Malloc disk",
00:08:30.742 "block_size": 512,
00:08:30.742 "num_blocks": 65536,
00:08:30.742 "uuid": "60f40ecc-c78a-41a3-89df-27517e0af65b",
00:08:30.742 "assigned_rate_limits": {
00:08:30.742 "rw_ios_per_sec": 0,
00:08:30.742 "rw_mbytes_per_sec": 0,
00:08:30.742 "r_mbytes_per_sec": 0,
00:08:30.742 "w_mbytes_per_sec": 0
00:08:30.742 },
00:08:30.742 "claimed": true,
00:08:30.742 "claim_type": "exclusive_write",
00:08:30.742 "zoned": false,
00:08:30.742 "supported_io_types": {
00:08:30.742 "read": true,
00:08:30.742 "write": true,
00:08:30.742 "unmap": true,
00:08:30.742 "flush": true,
00:08:30.742 "reset": true,
00:08:30.742 "nvme_admin": false,
00:08:30.742 "nvme_io": false,
00:08:30.742 "nvme_io_md": false,
00:08:30.742 "write_zeroes": true,
00:08:30.742 "zcopy": true,
00:08:30.742 "get_zone_info": false,
00:08:30.742 "zone_management": false,
00:08:30.742 "zone_append": false,
00:08:30.742 "compare": false,
00:08:30.742 "compare_and_write": false,
00:08:30.742 "abort": true,
00:08:30.742 "seek_hole": false,
00:08:30.742 "seek_data": false,
00:08:30.742 "copy": true,
00:08:30.742 "nvme_iov_md": false
00:08:30.742 },
00:08:30.742 "memory_domains": [
00:08:30.742 {
00:08:30.742 "dma_device_id": "system",
00:08:30.742 "dma_device_type": 1
00:08:30.742 },
00:08:30.742 {
00:08:30.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:30.742 "dma_device_type": 2
00:08:30.742 }
00:08:30.742 ],
00:08:30.742 "driver_specific": {}
00:08:30.742 }
00:08:30.742 ]
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:30.742 "name": "Existed_Raid",
00:08:30.742 "uuid": "f840e7a4-0721-45f0-94ff-fe1c3f41ed00",
00:08:30.742 "strip_size_kb": 64,
00:08:30.742 "state": "online",
00:08:30.742 "raid_level": "concat",
00:08:30.742 "superblock": true,
00:08:30.742 "num_base_bdevs": 2,
00:08:30.742 "num_base_bdevs_discovered": 2,
00:08:30.742 "num_base_bdevs_operational": 2,
00:08:30.742 "base_bdevs_list": [
00:08:30.742 {
00:08:30.742 "name": "BaseBdev1",
00:08:30.742 "uuid": "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37",
00:08:30.742 "is_configured": true,
00:08:30.742 "data_offset": 2048,
00:08:30.742 "data_size": 63488
00:08:30.742 },
00:08:30.742 {
00:08:30.742 "name": "BaseBdev2",
00:08:30.742 "uuid": "60f40ecc-c78a-41a3-89df-27517e0af65b",
00:08:30.742 "is_configured": true,
00:08:30.742 "data_offset": 2048,
00:08:30.742 "data_size": 63488
00:08:30.742 }
00:08:30.742 ]
00:08:30.742 }'
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:30.742 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:31.337 [2024-11-10 15:17:37.470586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:31.337 "name": "Existed_Raid",
00:08:31.337 "aliases": [
00:08:31.337 "f840e7a4-0721-45f0-94ff-fe1c3f41ed00"
00:08:31.337 ],
00:08:31.337 "product_name": "Raid Volume",
00:08:31.337 "block_size": 512,
00:08:31.337 "num_blocks": 126976,
00:08:31.337 "uuid": "f840e7a4-0721-45f0-94ff-fe1c3f41ed00",
00:08:31.337 "assigned_rate_limits": {
00:08:31.337 "rw_ios_per_sec": 0,
00:08:31.337 "rw_mbytes_per_sec": 0,
00:08:31.337 "r_mbytes_per_sec": 0,
00:08:31.337 "w_mbytes_per_sec": 0
00:08:31.337 },
00:08:31.337 "claimed": false,
00:08:31.337 "zoned": false,
00:08:31.337 "supported_io_types": {
00:08:31.337 "read": true,
00:08:31.337 "write": true,
00:08:31.337 "unmap": true,
00:08:31.337 "flush": true,
00:08:31.337 "reset": true,
00:08:31.337 "nvme_admin": false,
00:08:31.337 "nvme_io": false,
00:08:31.337 "nvme_io_md": false,
00:08:31.337 "write_zeroes": true,
00:08:31.337 "zcopy": false,
00:08:31.337 "get_zone_info": false,
00:08:31.337 "zone_management": false,
00:08:31.337 "zone_append": false,
00:08:31.337 "compare": false,
00:08:31.337 "compare_and_write": false,
00:08:31.337 "abort": false,
00:08:31.337 "seek_hole": false,
00:08:31.337 "seek_data": false,
00:08:31.337 "copy": false,
00:08:31.337 "nvme_iov_md": false
00:08:31.337 },
00:08:31.337 "memory_domains": [
00:08:31.337 {
00:08:31.337 "dma_device_id": "system",
00:08:31.337 "dma_device_type": 1
00:08:31.337 },
00:08:31.337 {
00:08:31.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.337 "dma_device_type": 2
00:08:31.337 },
00:08:31.337 {
00:08:31.337 "dma_device_id": "system",
00:08:31.337 "dma_device_type": 1
00:08:31.337 },
00:08:31.337 {
00:08:31.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:31.337 "dma_device_type": 2
00:08:31.337 }
00:08:31.337 ],
00:08:31.337 "driver_specific": {
00:08:31.337 "raid": {
00:08:31.337 "uuid": "f840e7a4-0721-45f0-94ff-fe1c3f41ed00",
00:08:31.337 "strip_size_kb": 64,
00:08:31.337 "state": "online",
00:08:31.337 "raid_level": "concat",
00:08:31.337 "superblock": true,
00:08:31.337 "num_base_bdevs": 2,
00:08:31.337 "num_base_bdevs_discovered": 2,
00:08:31.337 "num_base_bdevs_operational": 2,
00:08:31.337 "base_bdevs_list": [
00:08:31.337 {
00:08:31.337 "name": "BaseBdev1",
00:08:31.337 "uuid": "7ba7e7b7-bb0a-4e41-84ef-1ef6ce110f37",
00:08:31.337 "is_configured": true,
00:08:31.337 "data_offset": 2048,
00:08:31.337 "data_size": 63488
00:08:31.337 },
00:08:31.337 {
00:08:31.337 "name": "BaseBdev2",
00:08:31.337 "uuid": "60f40ecc-c78a-41a3-89df-27517e0af65b",
00:08:31.337 "is_configured": true,
00:08:31.337 "data_offset": 2048,
00:08:31.337 "data_size": 63488
00:08:31.337 }
00:08:31.337 ]
00:08:31.337 }
00:08:31.337 }
00:08:31.337 }'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:31.337 BaseBdev2'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.337 [2024-11-10 15:17:37.634444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:31.337 [2024-11-10 15:17:37.634531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:31.337 [2024-11-10 15:17:37.634601] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:31.337 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:31.338 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.597 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:31.597 "name": "Existed_Raid",
00:08:31.597 "uuid": "f840e7a4-0721-45f0-94ff-fe1c3f41ed00",
00:08:31.597 "strip_size_kb": 64,
00:08:31.597 "state": "offline",
00:08:31.597 "raid_level": "concat",
00:08:31.597 "superblock": true,
00:08:31.597 "num_base_bdevs": 2,
00:08:31.597 "num_base_bdevs_discovered": 1,
00:08:31.597 "num_base_bdevs_operational": 1,
00:08:31.597 "base_bdevs_list": [
00:08:31.597 {
00:08:31.597 "name": null,
00:08:31.597 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:31.597 "is_configured": false,
00:08:31.597 "data_offset": 0,
00:08:31.597 "data_size": 63488
00:08:31.597 },
00:08:31.597 {
00:08:31.597 "name": "BaseBdev2",
00:08:31.597 "uuid": "60f40ecc-c78a-41a3-89df-27517e0af65b",
00:08:31.597 "is_configured": true,
00:08:31.597 "data_offset": 2048,
00:08:31.597 "data_size": 63488
00:08:31.597 }
00:08:31.597 ]
00:08:31.597 }'
00:08:31.597 15:17:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:31.597 15:17:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.857 [2024-11-10 15:17:38.114385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:08:31.857 [2024-11-10 15:17:38.114448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:08:31.857 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74672
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 74672 ']'
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 74672
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:31.858 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74672
00:08:32.160 killing process with pid 74672
00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:32.160 15:17:38
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74672' 00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 74672 00:08:32.160 [2024-11-10 15:17:38.222821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 74672 00:08:32.160 [2024-11-10 15:17:38.223914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:32.160 00:08:32.160 real 0m3.686s 00:08:32.160 user 0m5.767s 00:08:32.160 sys 0m0.758s 00:08:32.160 ************************************ 00:08:32.160 END TEST raid_state_function_test_sb 00:08:32.160 ************************************ 00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.160 15:17:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.160 15:17:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:32.160 15:17:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:32.160 15:17:38 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.160 15:17:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:32.160 ************************************ 00:08:32.160 START TEST raid_superblock_test 00:08:32.160 ************************************ 00:08:32.160 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 2 00:08:32.160 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:32.421 15:17:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74908 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74908 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 74908 ']' 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 
-- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.421 15:17:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.421 [2024-11-10 15:17:38.616269] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:32.421 [2024-11-10 15:17:38.616489] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74908 ] 00:08:32.421 [2024-11-10 15:17:38.753346] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:32.421 [2024-11-10 15:17:38.771498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.681 [2024-11-10 15:17:38.798733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.681 [2024-11-10 15:17:38.843393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.681 [2024-11-10 15:17:38.843437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 malloc1 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 [2024-11-10 15:17:39.480417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.252 [2024-11-10 15:17:39.480578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.252 [2024-11-10 15:17:39.480635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:33.252 [2024-11-10 15:17:39.480701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.252 [2024-11-10 15:17:39.482929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.252 [2024-11-10 15:17:39.483019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.252 pt1 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 malloc2 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 [2024-11-10 15:17:39.509516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.252 [2024-11-10 15:17:39.509579] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.252 [2024-11-10 15:17:39.509601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:33.252 [2024-11-10 15:17:39.509613] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.252 [2024-11-10 15:17:39.511897] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.252 [2024-11-10 15:17:39.511939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.252 pt2 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.252 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.252 [2024-11-10 15:17:39.521570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:33.252 [2024-11-10 15:17:39.523578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.252 [2024-11-10 15:17:39.523748] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:33.252 [2024-11-10 15:17:39.523774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:33.252 [2024-11-10 15:17:39.524058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:33.252 [2024-11-10 15:17:39.524185] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:33.252 [2024-11-10 15:17:39.524199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:33.253 [2024-11-10 15:17:39.524336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:33.253 15:17:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.253 "name": "raid_bdev1", 00:08:33.253 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:33.253 "strip_size_kb": 64, 00:08:33.253 "state": "online", 00:08:33.253 "raid_level": "concat", 00:08:33.253 "superblock": true, 00:08:33.253 "num_base_bdevs": 2, 00:08:33.253 "num_base_bdevs_discovered": 2, 00:08:33.253 "num_base_bdevs_operational": 2, 00:08:33.253 "base_bdevs_list": [ 00:08:33.253 { 00:08:33.253 "name": "pt1", 00:08:33.253 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.253 "is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 }, 00:08:33.253 { 00:08:33.253 "name": "pt2", 00:08:33.253 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.253 
"is_configured": true, 00:08:33.253 "data_offset": 2048, 00:08:33.253 "data_size": 63488 00:08:33.253 } 00:08:33.253 ] 00:08:33.253 }' 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.253 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.832 15:17:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:33.832 [2024-11-10 15:17:40.002032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:33.832 "name": "raid_bdev1", 00:08:33.832 "aliases": [ 00:08:33.832 "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a" 00:08:33.832 ], 00:08:33.832 "product_name": "Raid Volume", 00:08:33.832 "block_size": 512, 00:08:33.832 "num_blocks": 126976, 00:08:33.832 "uuid": 
"35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:33.832 "assigned_rate_limits": { 00:08:33.832 "rw_ios_per_sec": 0, 00:08:33.832 "rw_mbytes_per_sec": 0, 00:08:33.832 "r_mbytes_per_sec": 0, 00:08:33.832 "w_mbytes_per_sec": 0 00:08:33.832 }, 00:08:33.832 "claimed": false, 00:08:33.832 "zoned": false, 00:08:33.832 "supported_io_types": { 00:08:33.832 "read": true, 00:08:33.832 "write": true, 00:08:33.832 "unmap": true, 00:08:33.832 "flush": true, 00:08:33.832 "reset": true, 00:08:33.832 "nvme_admin": false, 00:08:33.832 "nvme_io": false, 00:08:33.832 "nvme_io_md": false, 00:08:33.832 "write_zeroes": true, 00:08:33.832 "zcopy": false, 00:08:33.832 "get_zone_info": false, 00:08:33.832 "zone_management": false, 00:08:33.832 "zone_append": false, 00:08:33.832 "compare": false, 00:08:33.832 "compare_and_write": false, 00:08:33.832 "abort": false, 00:08:33.832 "seek_hole": false, 00:08:33.832 "seek_data": false, 00:08:33.832 "copy": false, 00:08:33.832 "nvme_iov_md": false 00:08:33.832 }, 00:08:33.832 "memory_domains": [ 00:08:33.832 { 00:08:33.832 "dma_device_id": "system", 00:08:33.832 "dma_device_type": 1 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.832 "dma_device_type": 2 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "dma_device_id": "system", 00:08:33.832 "dma_device_type": 1 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.832 "dma_device_type": 2 00:08:33.832 } 00:08:33.832 ], 00:08:33.832 "driver_specific": { 00:08:33.832 "raid": { 00:08:33.832 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:33.832 "strip_size_kb": 64, 00:08:33.832 "state": "online", 00:08:33.832 "raid_level": "concat", 00:08:33.832 "superblock": true, 00:08:33.832 "num_base_bdevs": 2, 00:08:33.832 "num_base_bdevs_discovered": 2, 00:08:33.832 "num_base_bdevs_operational": 2, 00:08:33.832 "base_bdevs_list": [ 00:08:33.832 { 00:08:33.832 "name": "pt1", 00:08:33.832 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:33.832 "is_configured": true, 00:08:33.832 "data_offset": 2048, 00:08:33.832 "data_size": 63488 00:08:33.832 }, 00:08:33.832 { 00:08:33.832 "name": "pt2", 00:08:33.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.832 "is_configured": true, 00:08:33.832 "data_offset": 2048, 00:08:33.832 "data_size": 63488 00:08:33.832 } 00:08:33.832 ] 00:08:33.832 } 00:08:33.832 } 00:08:33.832 }' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:33.832 pt2' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.832 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 [2024-11-10 15:17:40.209948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a ']' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 [2024-11-10 15:17:40.237714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.093 [2024-11-10 15:17:40.237799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.093 [2024-11-10 15:17:40.237947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.093 [2024-11-10 15:17:40.238062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.093 [2024-11-10 15:17:40.238140] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 [2024-11-10 15:17:40.357789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:34.093 [2024-11-10 15:17:40.359721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:34.093 [2024-11-10 15:17:40.359798] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:34.093 [2024-11-10 15:17:40.359865] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:34.093 [2024-11-10 15:17:40.359885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.093 [2024-11-10 15:17:40.359906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:34.093 request: 00:08:34.093 { 00:08:34.093 "name": "raid_bdev1", 00:08:34.093 "raid_level": "concat", 00:08:34.093 "base_bdevs": [ 00:08:34.093 "malloc1", 00:08:34.093 "malloc2" 00:08:34.093 ], 00:08:34.093 "strip_size_kb": 64, 00:08:34.093 "superblock": false, 00:08:34.093 "method": "bdev_raid_create", 00:08:34.093 "req_id": 1 00:08:34.093 } 00:08:34.093 Got JSON-RPC error response 00:08:34.093 response: 00:08:34.093 { 00:08:34.093 "code": -17, 00:08:34.093 "message": "Failed to create RAID bdev raid_bdev1: File exists" 
00:08:34.093 } 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.093 [2024-11-10 15:17:40.421772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:34.093 [2024-11-10 15:17:40.421880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.093 [2024-11-10 15:17:40.421919] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:34.093 
[2024-11-10 15:17:40.421958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.093 [2024-11-10 15:17:40.424269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.093 [2024-11-10 15:17:40.424359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:34.093 [2024-11-10 15:17:40.424468] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:34.093 [2024-11-10 15:17:40.424554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:34.093 pt1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.093 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.094 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.353 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.353 "name": "raid_bdev1", 00:08:34.353 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:34.353 "strip_size_kb": 64, 00:08:34.353 "state": "configuring", 00:08:34.354 "raid_level": "concat", 00:08:34.354 "superblock": true, 00:08:34.354 "num_base_bdevs": 2, 00:08:34.354 "num_base_bdevs_discovered": 1, 00:08:34.354 "num_base_bdevs_operational": 2, 00:08:34.354 "base_bdevs_list": [ 00:08:34.354 { 00:08:34.354 "name": "pt1", 00:08:34.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.354 "is_configured": true, 00:08:34.354 "data_offset": 2048, 00:08:34.354 "data_size": 63488 00:08:34.354 }, 00:08:34.354 { 00:08:34.354 "name": null, 00:08:34.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.354 "is_configured": false, 00:08:34.354 "data_offset": 2048, 00:08:34.354 "data_size": 63488 00:08:34.354 } 00:08:34.354 ] 00:08:34.354 }' 00:08:34.354 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.354 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.614 [2024-11-10 15:17:40.845911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.614 [2024-11-10 15:17:40.846082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.614 [2024-11-10 15:17:40.846117] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:34.614 [2024-11-10 15:17:40.846133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.614 [2024-11-10 15:17:40.846596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.614 [2024-11-10 15:17:40.846619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.614 [2024-11-10 15:17:40.846705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.614 [2024-11-10 15:17:40.846731] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.614 [2024-11-10 15:17:40.846824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:34.614 [2024-11-10 15:17:40.846837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:34.614 [2024-11-10 15:17:40.847092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:34.614 [2024-11-10 15:17:40.847243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:34.614 [2024-11-10 15:17:40.847254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:34.614 [2024-11-10 15:17:40.847374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.614 
pt2 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:34.614 15:17:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.614 "name": "raid_bdev1", 00:08:34.614 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:34.614 "strip_size_kb": 64, 00:08:34.614 "state": "online", 00:08:34.614 "raid_level": "concat", 00:08:34.614 "superblock": true, 00:08:34.614 "num_base_bdevs": 2, 00:08:34.614 "num_base_bdevs_discovered": 2, 00:08:34.614 "num_base_bdevs_operational": 2, 00:08:34.614 "base_bdevs_list": [ 00:08:34.614 { 00:08:34.614 "name": "pt1", 00:08:34.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.614 "is_configured": true, 00:08:34.614 "data_offset": 2048, 00:08:34.614 "data_size": 63488 00:08:34.614 }, 00:08:34.614 { 00:08:34.614 "name": "pt2", 00:08:34.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.614 "is_configured": true, 00:08:34.614 "data_offset": 2048, 00:08:34.614 "data_size": 63488 00:08:34.614 } 00:08:34.614 ] 00:08:34.614 }' 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.614 15:17:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
jq '.[]' 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.184 [2024-11-10 15:17:41.298332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.184 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:35.184 "name": "raid_bdev1", 00:08:35.184 "aliases": [ 00:08:35.184 "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a" 00:08:35.184 ], 00:08:35.184 "product_name": "Raid Volume", 00:08:35.184 "block_size": 512, 00:08:35.184 "num_blocks": 126976, 00:08:35.184 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:35.184 "assigned_rate_limits": { 00:08:35.184 "rw_ios_per_sec": 0, 00:08:35.184 "rw_mbytes_per_sec": 0, 00:08:35.184 "r_mbytes_per_sec": 0, 00:08:35.184 "w_mbytes_per_sec": 0 00:08:35.184 }, 00:08:35.184 "claimed": false, 00:08:35.184 "zoned": false, 00:08:35.184 "supported_io_types": { 00:08:35.184 "read": true, 00:08:35.184 "write": true, 00:08:35.184 "unmap": true, 00:08:35.184 "flush": true, 00:08:35.184 "reset": true, 00:08:35.184 "nvme_admin": false, 00:08:35.184 "nvme_io": false, 00:08:35.184 "nvme_io_md": false, 00:08:35.184 "write_zeroes": true, 00:08:35.184 "zcopy": false, 00:08:35.184 "get_zone_info": false, 00:08:35.184 "zone_management": false, 00:08:35.184 "zone_append": false, 00:08:35.184 "compare": false, 00:08:35.184 "compare_and_write": false, 00:08:35.184 "abort": false, 00:08:35.184 "seek_hole": false, 00:08:35.184 "seek_data": false, 00:08:35.184 "copy": false, 00:08:35.184 "nvme_iov_md": false 00:08:35.184 }, 00:08:35.184 "memory_domains": [ 00:08:35.184 { 00:08:35.184 "dma_device_id": "system", 00:08:35.184 "dma_device_type": 1 00:08:35.184 }, 00:08:35.184 { 00:08:35.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:35.184 "dma_device_type": 2 00:08:35.184 }, 00:08:35.184 { 00:08:35.184 "dma_device_id": "system", 00:08:35.184 "dma_device_type": 1 00:08:35.184 }, 00:08:35.184 { 00:08:35.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.184 "dma_device_type": 2 00:08:35.184 } 00:08:35.184 ], 00:08:35.184 "driver_specific": { 00:08:35.184 "raid": { 00:08:35.184 "uuid": "35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a", 00:08:35.185 "strip_size_kb": 64, 00:08:35.185 "state": "online", 00:08:35.185 "raid_level": "concat", 00:08:35.185 "superblock": true, 00:08:35.185 "num_base_bdevs": 2, 00:08:35.185 "num_base_bdevs_discovered": 2, 00:08:35.185 "num_base_bdevs_operational": 2, 00:08:35.185 "base_bdevs_list": [ 00:08:35.185 { 00:08:35.185 "name": "pt1", 00:08:35.185 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:35.185 "is_configured": true, 00:08:35.185 "data_offset": 2048, 00:08:35.185 "data_size": 63488 00:08:35.185 }, 00:08:35.185 { 00:08:35.185 "name": "pt2", 00:08:35.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.185 "is_configured": true, 00:08:35.185 "data_offset": 2048, 00:08:35.185 "data_size": 63488 00:08:35.185 } 00:08:35.185 ] 00:08:35.185 } 00:08:35.185 } 00:08:35.185 }' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:35.185 pt2' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:35.185 
15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:35.185 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
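The block-size comparison traced above builds a fingerprint string with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` and glob-matches it against `512` plus three spaces. A minimal sketch of that same fingerprint logic in Python (the helper name `metadata_fingerprint` is illustrative, not from the test; `jq`'s `join` renders `null`/missing fields as empty strings, which is what produces the trailing spaces):

```python
import json

def metadata_fingerprint(bdev: dict) -> str:
    # Mirrors jq's [.block_size, .md_size, .md_interleave, .dif_type] | join(" "):
    # missing or null fields become empty strings, so a bdev with no metadata
    # fields yields "512" followed by three separator spaces.
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Field values taken from the raid_bdev1 JSON dumped earlier in this log.
bdev = json.loads('{"name": "raid_bdev1", "block_size": 512, "num_blocks": 126976}')
print(repr(metadata_fingerprint(bdev)))  # prints '512   '
```

This is why the test's pattern is written as `\5\1\2\ \ \ ` in the xtrace: it is matching the three empty metadata slots, not just the block size.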
00:08:35.445 [2024-11-10 15:17:41.550328] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a '!=' 35af0c19-0bfe-487d-b2e1-6fabfd2d7c2a ']' 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74908 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 74908 ']' 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 74908 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74908 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74908' 00:08:35.445 killing process with pid 74908 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 74908 00:08:35.445 [2024-11-10 15:17:41.626881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.445 [2024-11-10 15:17:41.627066] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.445 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 74908 00:08:35.445 [2024-11-10 15:17:41.627164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.445 [2024-11-10 15:17:41.627183] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:35.445 [2024-11-10 15:17:41.650585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.705 15:17:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:35.705 00:08:35.705 real 0m3.354s 00:08:35.705 user 0m5.171s 00:08:35.705 sys 0m0.756s 00:08:35.705 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:35.705 15:17:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 ************************************ 00:08:35.705 END TEST raid_superblock_test 00:08:35.705 ************************************ 00:08:35.705 15:17:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:35.705 15:17:41 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:35.705 15:17:41 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:35.705 15:17:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 ************************************ 00:08:35.705 START TEST raid_read_error_test 00:08:35.705 ************************************ 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 read 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:35.705 15:17:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.i4e0K9fssx 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75108 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75108 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 75108 ']' 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:35.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:35.705 15:17:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.705 [2024-11-10 15:17:42.052786] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:35.705 [2024-11-10 15:17:42.052925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75108 ] 00:08:35.965 [2024-11-10 15:17:42.185725] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:35.965 [2024-11-10 15:17:42.204160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.965 [2024-11-10 15:17:42.229908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.965 [2024-11-10 15:17:42.274607] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.965 [2024-11-10 15:17:42.274667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.904 BaseBdev1_malloc 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.904 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 true 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 [2024-11-10 15:17:42.935714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.905 [2024-11-10 15:17:42.935798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.905 [2024-11-10 15:17:42.935822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.905 [2024-11-10 15:17:42.935840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.905 [2024-11-10 15:17:42.938277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.905 [2024-11-10 15:17:42.938387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.905 BaseBdev1 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 BaseBdev2_malloc 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 true 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 [2024-11-10 15:17:42.977137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.905 [2024-11-10 15:17:42.977195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.905 [2024-11-10 15:17:42.977213] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.905 [2024-11-10 15:17:42.977225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.905 [2024-11-10 15:17:42.979495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.905 [2024-11-10 15:17:42.979544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.905 BaseBdev2 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 [2024-11-10 15:17:42.989165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.905 [2024-11-10 15:17:42.991101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.905 [2024-11-10 15:17:42.991305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:08:36.905 [2024-11-10 15:17:42.991329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.905 [2024-11-10 15:17:42.991587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:36.905 [2024-11-10 15:17:42.991743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:36.905 [2024-11-10 15:17:42.991760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:36.905 [2024-11-10 15:17:42.991922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.905 15:17:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
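The `verify_raid_bdev_state` helper seen throughout this log pulls one entry out of the `bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks its fields. A rough Python equivalent of that selection step, run against a trimmed, illustrative copy of the JSON this log prints (the second list entry is hypothetical padding to show the filter doing work):

```python
import json

# Trimmed stand-in for the bdev_raid_get_bdevs output quoted in this log.
rpc_output = json.loads("""[
  {"name": "raid_bdev1", "state": "online", "raid_level": "concat",
   "num_base_bdevs": 2, "num_base_bdevs_discovered": 2},
  {"name": "some_other_bdev", "state": "configuring"}
]""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in rpc_output if b["name"] == "raid_bdev1")
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs"] == 2
```

The helper's assertions on `state`, `raid_level`, `strip_size_kb`, and the base-bdev counts are what distinguish the earlier `configuring` snapshot (one base bdev discovered) from the `online` one above (both discovered).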
00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.905 "name": "raid_bdev1", 00:08:36.905 "uuid": "cac45f58-f075-4cd3-8e4f-efc6154ac171", 00:08:36.905 "strip_size_kb": 64, 00:08:36.905 "state": "online", 00:08:36.905 "raid_level": "concat", 00:08:36.905 "superblock": true, 00:08:36.905 "num_base_bdevs": 2, 00:08:36.905 "num_base_bdevs_discovered": 2, 00:08:36.905 "num_base_bdevs_operational": 2, 00:08:36.905 "base_bdevs_list": [ 00:08:36.905 { 00:08:36.905 "name": "BaseBdev1", 00:08:36.905 "uuid": "7aac795a-f709-5f4f-8a51-39a24c760285", 00:08:36.905 "is_configured": true, 00:08:36.905 "data_offset": 2048, 00:08:36.905 "data_size": 63488 00:08:36.905 }, 00:08:36.905 { 00:08:36.905 "name": "BaseBdev2", 00:08:36.905 "uuid": "08a1dae3-b9b1-513e-b35d-914257ecbddc", 00:08:36.905 "is_configured": true, 00:08:36.905 "data_offset": 2048, 00:08:36.905 "data_size": 63488 00:08:36.905 } 00:08:36.905 ] 00:08:36.905 }' 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.905 15:17:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.165 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:37.165 15:17:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:37.165 [2024-11-10 15:17:43.501703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:38.103 
15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.103 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.363 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.363 "name": "raid_bdev1", 00:08:38.363 "uuid": "cac45f58-f075-4cd3-8e4f-efc6154ac171", 00:08:38.363 "strip_size_kb": 64, 00:08:38.363 "state": "online", 00:08:38.363 "raid_level": "concat", 00:08:38.363 "superblock": true, 00:08:38.363 "num_base_bdevs": 2, 00:08:38.363 "num_base_bdevs_discovered": 2, 00:08:38.363 "num_base_bdevs_operational": 2, 00:08:38.363 "base_bdevs_list": [ 00:08:38.363 { 00:08:38.363 "name": "BaseBdev1", 00:08:38.363 "uuid": "7aac795a-f709-5f4f-8a51-39a24c760285", 00:08:38.363 "is_configured": true, 00:08:38.363 "data_offset": 2048, 00:08:38.363 "data_size": 63488 00:08:38.363 }, 00:08:38.363 { 00:08:38.363 "name": "BaseBdev2", 00:08:38.363 "uuid": "08a1dae3-b9b1-513e-b35d-914257ecbddc", 00:08:38.363 "is_configured": true, 00:08:38.363 "data_offset": 2048, 00:08:38.363 "data_size": 63488 00:08:38.363 } 00:08:38.363 ] 00:08:38.363 }' 00:08:38.363 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.363 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.623 [2024-11-10 15:17:44.884727] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.623 [2024-11-10 15:17:44.884837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.623 [2024-11-10 15:17:44.887538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.623 [2024-11-10 15:17:44.887644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.623 [2024-11-10 15:17:44.887706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.623 [2024-11-10 15:17:44.887808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:38.623 { 00:08:38.623 "results": [ 00:08:38.623 { 00:08:38.623 "job": "raid_bdev1", 00:08:38.623 "core_mask": "0x1", 00:08:38.623 "workload": "randrw", 00:08:38.623 "percentage": 50, 00:08:38.623 "status": "finished", 00:08:38.623 "queue_depth": 1, 00:08:38.623 "io_size": 131072, 00:08:38.623 "runtime": 1.381101, 00:08:38.623 "iops": 15818.538977236278, 00:08:38.623 "mibps": 1977.3173721545347, 00:08:38.623 "io_failed": 1, 00:08:38.623 "io_timeout": 0, 00:08:38.623 "avg_latency_us": 87.30639426851695, 00:08:38.623 "min_latency_us": 26.775908655103287, 00:08:38.623 "max_latency_us": 1692.2374270025277 00:08:38.623 } 00:08:38.623 ], 00:08:38.623 "core_count": 1 00:08:38.623 } 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75108 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 75108 ']' 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 75108 00:08:38.623 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75108 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75108' 00:08:38.624 killing process with pid 75108 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 75108 00:08:38.624 [2024-11-10 15:17:44.926464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.624 15:17:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 75108 00:08:38.624 [2024-11-10 15:17:44.942152] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.i4e0K9fssx 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:38.885 00:08:38.885 real 0m3.225s 00:08:38.885 user 0m4.081s 00:08:38.885 sys 0m0.542s 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:08:38.885 ************************************ 00:08:38.885 END TEST raid_read_error_test 00:08:38.885 ************************************ 00:08:38.885 15:17:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.885 15:17:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:38.885 15:17:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:38.885 15:17:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:38.885 15:17:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.885 ************************************ 00:08:38.885 START TEST raid_write_error_test 00:08:38.885 ************************************ 00:08:38.885 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 2 write 00:08:38.885 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:38.885 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:38.885 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9XpjtElfcP 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75237 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75237 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:39.146 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 75237 ']' 00:08:39.147 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.147 
15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:39.147 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.147 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:39.147 15:17:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.147 [2024-11-10 15:17:45.350473] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:39.147 [2024-11-10 15:17:45.350608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75237 ] 00:08:39.147 [2024-11-10 15:17:45.486059] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:39.406 [2024-11-10 15:17:45.525407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.406 [2024-11-10 15:17:45.552033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.406 [2024-11-10 15:17:45.596923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.406 [2024-11-10 15:17:45.596968] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.975 BaseBdev1_malloc 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.975 true 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.975 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.975 15:17:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.975 [2024-11-10 15:17:46.241841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.975 [2024-11-10 15:17:46.241940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.975 [2024-11-10 15:17:46.241965] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.975 [2024-11-10 15:17:46.241981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.975 [2024-11-10 15:17:46.244343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.975 [2024-11-10 15:17:46.244476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.975 BaseBdev1 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.976 BaseBdev2_malloc 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.976 true 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.976 [2024-11-10 15:17:46.282893] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.976 [2024-11-10 15:17:46.282957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.976 [2024-11-10 15:17:46.282976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.976 [2024-11-10 15:17:46.282989] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.976 [2024-11-10 15:17:46.285378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.976 [2024-11-10 15:17:46.285425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.976 BaseBdev2 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.976 [2024-11-10 15:17:46.294913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.976 [2024-11-10 15:17:46.297071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.976 [2024-11-10 15:17:46.297252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:39.976 
[2024-11-10 15:17:46.297269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:39.976 [2024-11-10 15:17:46.297540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:39.976 [2024-11-10 15:17:46.297705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:39.976 [2024-11-10 15:17:46.297722] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:39.976 [2024-11-10 15:17:46.297875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.976 15:17:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.976 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.236 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.236 "name": "raid_bdev1", 00:08:40.236 "uuid": "b1661b02-42d0-4470-be0b-17e82c287dc2", 00:08:40.236 "strip_size_kb": 64, 00:08:40.236 "state": "online", 00:08:40.236 "raid_level": "concat", 00:08:40.236 "superblock": true, 00:08:40.236 "num_base_bdevs": 2, 00:08:40.236 "num_base_bdevs_discovered": 2, 00:08:40.236 "num_base_bdevs_operational": 2, 00:08:40.236 "base_bdevs_list": [ 00:08:40.236 { 00:08:40.236 "name": "BaseBdev1", 00:08:40.236 "uuid": "2daad3f3-96ad-5bcb-9b22-4081e6f43f91", 00:08:40.236 "is_configured": true, 00:08:40.236 "data_offset": 2048, 00:08:40.236 "data_size": 63488 00:08:40.236 }, 00:08:40.236 { 00:08:40.236 "name": "BaseBdev2", 00:08:40.236 "uuid": "efe04426-4d33-5631-bd3b-6e67732e30d2", 00:08:40.236 "is_configured": true, 00:08:40.236 "data_offset": 2048, 00:08:40.236 "data_size": 63488 00:08:40.236 } 00:08:40.236 ] 00:08:40.236 }' 00:08:40.236 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.236 15:17:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.495 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:40.495 15:17:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.495 [2024-11-10 15:17:46.839515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:41.435 15:17:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.435 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.695 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.695 "name": "raid_bdev1", 00:08:41.695 "uuid": "b1661b02-42d0-4470-be0b-17e82c287dc2", 00:08:41.695 "strip_size_kb": 64, 00:08:41.695 "state": "online", 00:08:41.695 "raid_level": "concat", 00:08:41.695 "superblock": true, 00:08:41.695 "num_base_bdevs": 2, 00:08:41.695 "num_base_bdevs_discovered": 2, 00:08:41.695 "num_base_bdevs_operational": 2, 00:08:41.695 "base_bdevs_list": [ 00:08:41.695 { 00:08:41.695 "name": "BaseBdev1", 00:08:41.695 "uuid": "2daad3f3-96ad-5bcb-9b22-4081e6f43f91", 00:08:41.695 "is_configured": true, 00:08:41.695 "data_offset": 2048, 00:08:41.695 "data_size": 63488 00:08:41.695 }, 00:08:41.695 { 00:08:41.695 "name": "BaseBdev2", 00:08:41.695 "uuid": "efe04426-4d33-5631-bd3b-6e67732e30d2", 00:08:41.695 "is_configured": true, 00:08:41.695 "data_offset": 2048, 00:08:41.695 "data_size": 63488 00:08:41.695 } 00:08:41.695 ] 00:08:41.695 }' 00:08:41.695 15:17:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.695 15:17:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.955 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.955 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.955 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.955 [2024-11-10 15:17:48.170081] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.955 [2024-11-10 15:17:48.170186] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.955 [2024-11-10 15:17:48.172745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.955 [2024-11-10 15:17:48.172862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.955 [2024-11-10 15:17:48.172921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.955 [2024-11-10 15:17:48.172992] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:41.955 { 00:08:41.956 "results": [ 00:08:41.956 { 00:08:41.956 "job": "raid_bdev1", 00:08:41.956 "core_mask": "0x1", 00:08:41.956 "workload": "randrw", 00:08:41.956 "percentage": 50, 00:08:41.956 "status": "finished", 00:08:41.956 "queue_depth": 1, 00:08:41.956 "io_size": 131072, 00:08:41.956 "runtime": 1.328619, 00:08:41.956 "iops": 16885.201852449798, 00:08:41.956 "mibps": 2110.6502315562248, 00:08:41.956 "io_failed": 1, 00:08:41.956 "io_timeout": 0, 00:08:41.956 "avg_latency_us": 81.94439778388038, 00:08:41.956 "min_latency_us": 25.21398065022226, 00:08:41.956 "max_latency_us": 1335.2253116011505 00:08:41.956 } 00:08:41.956 ], 00:08:41.956 "core_count": 1 00:08:41.956 } 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75237 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 75237 ']' 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 75237 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:08:41.956 15:17:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75237 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75237' 00:08:41.956 killing process with pid 75237 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 75237 00:08:41.956 [2024-11-10 15:17:48.212558] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.956 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 75237 00:08:41.956 [2024-11-10 15:17:48.228454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9XpjtElfcP 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:08:42.216 ************************************ 00:08:42.216 END TEST raid_write_error_test 00:08:42.216 
************************************ 00:08:42.216 00:08:42.216 real 0m3.208s 00:08:42.216 user 0m4.084s 00:08:42.216 sys 0m0.526s 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:42.216 15:17:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.216 15:17:48 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:42.216 15:17:48 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:42.216 15:17:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:42.216 15:17:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:42.216 15:17:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:42.216 ************************************ 00:08:42.216 START TEST raid_state_function_test 00:08:42.216 ************************************ 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 false 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:42.216 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=75370 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75370' 00:08:42.217 Process raid pid: 75370 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 
75370 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 75370 ']' 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:42.217 15:17:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.477 [2024-11-10 15:17:48.621251] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:42.477 [2024-11-10 15:17:48.621478] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.477 [2024-11-10 15:17:48.754884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:42.477 [2024-11-10 15:17:48.794128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.477 [2024-11-10 15:17:48.821170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.736 [2024-11-10 15:17:48.865252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.736 [2024-11-10 15:17:48.865289] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.305 [2024-11-10 15:17:49.444544] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.305 [2024-11-10 15:17:49.444614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.305 [2024-11-10 15:17:49.444631] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.305 [2024-11-10 15:17:49.444642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.305 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.306 "name": "Existed_Raid", 00:08:43.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.306 "strip_size_kb": 0, 00:08:43.306 "state": "configuring", 00:08:43.306 "raid_level": "raid1", 00:08:43.306 "superblock": false, 00:08:43.306 "num_base_bdevs": 2, 00:08:43.306 "num_base_bdevs_discovered": 0, 00:08:43.306 "num_base_bdevs_operational": 2, 00:08:43.306 "base_bdevs_list": [ 00:08:43.306 { 00:08:43.306 "name": "BaseBdev1", 00:08:43.306 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:43.306 "is_configured": false, 00:08:43.306 "data_offset": 0, 00:08:43.306 "data_size": 0 00:08:43.306 }, 00:08:43.306 { 00:08:43.306 "name": "BaseBdev2", 00:08:43.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.306 "is_configured": false, 00:08:43.306 "data_offset": 0, 00:08:43.306 "data_size": 0 00:08:43.306 } 00:08:43.306 ] 00:08:43.306 }' 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.306 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.566 [2024-11-10 15:17:49.872583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.566 [2024-11-10 15:17:49.872691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.566 [2024-11-10 15:17:49.884580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:43.566 [2024-11-10 15:17:49.884687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:43.566 [2024-11-10 
15:17:49.884726] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.566 [2024-11-10 15:17:49.884753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.566 [2024-11-10 15:17:49.905785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.566 BaseBdev1 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.566 15:17:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.566 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.825 [ 00:08:43.825 { 00:08:43.825 "name": "BaseBdev1", 00:08:43.825 "aliases": [ 00:08:43.825 "da14206b-514a-485c-904b-c7619c2a99d7" 00:08:43.825 ], 00:08:43.825 "product_name": "Malloc disk", 00:08:43.825 "block_size": 512, 00:08:43.825 "num_blocks": 65536, 00:08:43.825 "uuid": "da14206b-514a-485c-904b-c7619c2a99d7", 00:08:43.825 "assigned_rate_limits": { 00:08:43.825 "rw_ios_per_sec": 0, 00:08:43.825 "rw_mbytes_per_sec": 0, 00:08:43.825 "r_mbytes_per_sec": 0, 00:08:43.825 "w_mbytes_per_sec": 0 00:08:43.825 }, 00:08:43.825 "claimed": true, 00:08:43.825 "claim_type": "exclusive_write", 00:08:43.825 "zoned": false, 00:08:43.825 "supported_io_types": { 00:08:43.825 "read": true, 00:08:43.825 "write": true, 00:08:43.825 "unmap": true, 00:08:43.825 "flush": true, 00:08:43.825 "reset": true, 00:08:43.825 "nvme_admin": false, 00:08:43.825 "nvme_io": false, 00:08:43.825 "nvme_io_md": false, 00:08:43.825 "write_zeroes": true, 00:08:43.825 "zcopy": true, 00:08:43.825 "get_zone_info": false, 00:08:43.825 "zone_management": false, 00:08:43.825 "zone_append": false, 00:08:43.825 "compare": false, 00:08:43.825 "compare_and_write": false, 00:08:43.825 "abort": true, 00:08:43.825 "seek_hole": false, 00:08:43.825 "seek_data": false, 00:08:43.825 "copy": true, 00:08:43.825 "nvme_iov_md": false 00:08:43.825 }, 00:08:43.825 "memory_domains": [ 00:08:43.825 { 00:08:43.825 "dma_device_id": "system", 00:08:43.825 "dma_device_type": 1 00:08:43.825 }, 00:08:43.825 { 00:08:43.825 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.825 "dma_device_type": 
2 00:08:43.825 } 00:08:43.825 ], 00:08:43.825 "driver_specific": {} 00:08:43.825 } 00:08:43.825 ] 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.825 "name": "Existed_Raid", 00:08:43.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.825 "strip_size_kb": 0, 00:08:43.825 "state": "configuring", 00:08:43.825 "raid_level": "raid1", 00:08:43.825 "superblock": false, 00:08:43.825 "num_base_bdevs": 2, 00:08:43.825 "num_base_bdevs_discovered": 1, 00:08:43.825 "num_base_bdevs_operational": 2, 00:08:43.825 "base_bdevs_list": [ 00:08:43.825 { 00:08:43.825 "name": "BaseBdev1", 00:08:43.825 "uuid": "da14206b-514a-485c-904b-c7619c2a99d7", 00:08:43.825 "is_configured": true, 00:08:43.825 "data_offset": 0, 00:08:43.825 "data_size": 65536 00:08:43.825 }, 00:08:43.825 { 00:08:43.825 "name": "BaseBdev2", 00:08:43.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.825 "is_configured": false, 00:08:43.825 "data_offset": 0, 00:08:43.825 "data_size": 0 00:08:43.825 } 00:08:43.825 ] 00:08:43.825 }' 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.825 15:17:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.084 [2024-11-10 15:17:50.385958] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.084 [2024-11-10 15:17:50.386107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.084 15:17:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.084 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.084 [2024-11-10 15:17:50.397971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.085 [2024-11-10 15:17:50.400172] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.085 [2024-11-10 15:17:50.400268] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.085 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.344 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.344 "name": "Existed_Raid", 00:08:44.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.344 "strip_size_kb": 0, 00:08:44.344 "state": "configuring", 00:08:44.344 "raid_level": "raid1", 00:08:44.344 "superblock": false, 00:08:44.344 "num_base_bdevs": 2, 00:08:44.344 "num_base_bdevs_discovered": 1, 00:08:44.344 "num_base_bdevs_operational": 2, 00:08:44.344 "base_bdevs_list": [ 00:08:44.344 { 00:08:44.344 "name": "BaseBdev1", 00:08:44.344 "uuid": "da14206b-514a-485c-904b-c7619c2a99d7", 00:08:44.344 "is_configured": true, 00:08:44.344 "data_offset": 0, 00:08:44.344 "data_size": 65536 00:08:44.344 }, 00:08:44.344 { 00:08:44.344 "name": "BaseBdev2", 00:08:44.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.344 "is_configured": false, 00:08:44.344 "data_offset": 0, 00:08:44.344 "data_size": 0 00:08:44.344 } 00:08:44.344 ] 00:08:44.344 }' 00:08:44.344 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.344 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.604 
15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:44.604 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.604 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.604 [2024-11-10 15:17:50.893467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:44.604 [2024-11-10 15:17:50.893532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:44.605 [2024-11-10 15:17:50.893547] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:44.605 [2024-11-10 15:17:50.893832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:44.605 [2024-11-10 15:17:50.894042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:44.605 [2024-11-10 15:17:50.894061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:44.605 [2024-11-10 15:17:50.894313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.605 BaseBdev2 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 [ 00:08:44.605 { 00:08:44.605 "name": "BaseBdev2", 00:08:44.605 "aliases": [ 00:08:44.605 "7b1c2433-fedb-4ac0-aa2f-6a5a3fb7aff6" 00:08:44.605 ], 00:08:44.605 "product_name": "Malloc disk", 00:08:44.605 "block_size": 512, 00:08:44.605 "num_blocks": 65536, 00:08:44.605 "uuid": "7b1c2433-fedb-4ac0-aa2f-6a5a3fb7aff6", 00:08:44.605 "assigned_rate_limits": { 00:08:44.605 "rw_ios_per_sec": 0, 00:08:44.605 "rw_mbytes_per_sec": 0, 00:08:44.605 "r_mbytes_per_sec": 0, 00:08:44.605 "w_mbytes_per_sec": 0 00:08:44.605 }, 00:08:44.605 "claimed": true, 00:08:44.605 "claim_type": "exclusive_write", 00:08:44.605 "zoned": false, 00:08:44.605 "supported_io_types": { 00:08:44.605 "read": true, 00:08:44.605 "write": true, 00:08:44.605 "unmap": true, 00:08:44.605 "flush": true, 00:08:44.605 "reset": true, 00:08:44.605 "nvme_admin": false, 00:08:44.605 "nvme_io": false, 00:08:44.605 "nvme_io_md": false, 00:08:44.605 "write_zeroes": true, 00:08:44.605 "zcopy": true, 00:08:44.605 "get_zone_info": false, 00:08:44.605 "zone_management": false, 00:08:44.605 "zone_append": false, 00:08:44.605 "compare": false, 00:08:44.605 "compare_and_write": false, 
00:08:44.605 "abort": true, 00:08:44.605 "seek_hole": false, 00:08:44.605 "seek_data": false, 00:08:44.605 "copy": true, 00:08:44.605 "nvme_iov_md": false 00:08:44.605 }, 00:08:44.605 "memory_domains": [ 00:08:44.605 { 00:08:44.605 "dma_device_id": "system", 00:08:44.605 "dma_device_type": 1 00:08:44.605 }, 00:08:44.605 { 00:08:44.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.605 "dma_device_type": 2 00:08:44.605 } 00:08:44.605 ], 00:08:44.605 "driver_specific": {} 00:08:44.605 } 00:08:44.605 ] 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.605 
15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.605 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:44.865 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.865 "name": "Existed_Raid", 00:08:44.865 "uuid": "42d0e347-2615-4736-9618-8e4fb5d983f5", 00:08:44.865 "strip_size_kb": 0, 00:08:44.865 "state": "online", 00:08:44.865 "raid_level": "raid1", 00:08:44.865 "superblock": false, 00:08:44.865 "num_base_bdevs": 2, 00:08:44.865 "num_base_bdevs_discovered": 2, 00:08:44.865 "num_base_bdevs_operational": 2, 00:08:44.865 "base_bdevs_list": [ 00:08:44.865 { 00:08:44.865 "name": "BaseBdev1", 00:08:44.865 "uuid": "da14206b-514a-485c-904b-c7619c2a99d7", 00:08:44.865 "is_configured": true, 00:08:44.865 "data_offset": 0, 00:08:44.865 "data_size": 65536 00:08:44.865 }, 00:08:44.865 { 00:08:44.865 "name": "BaseBdev2", 00:08:44.865 "uuid": "7b1c2433-fedb-4ac0-aa2f-6a5a3fb7aff6", 00:08:44.865 "is_configured": true, 00:08:44.865 "data_offset": 0, 00:08:44.865 "data_size": 65536 00:08:44.865 } 00:08:44.865 ] 00:08:44.865 }' 00:08:44.865 15:17:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.865 15:17:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:45.125 15:17:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:45.125 [2024-11-10 15:17:51.349942] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:45.125 "name": "Existed_Raid", 00:08:45.125 "aliases": [ 00:08:45.125 "42d0e347-2615-4736-9618-8e4fb5d983f5" 00:08:45.125 ], 00:08:45.125 "product_name": "Raid Volume", 00:08:45.125 "block_size": 512, 00:08:45.125 "num_blocks": 65536, 00:08:45.125 "uuid": "42d0e347-2615-4736-9618-8e4fb5d983f5", 00:08:45.125 "assigned_rate_limits": { 00:08:45.125 "rw_ios_per_sec": 0, 00:08:45.125 "rw_mbytes_per_sec": 0, 00:08:45.125 "r_mbytes_per_sec": 0, 00:08:45.125 "w_mbytes_per_sec": 0 00:08:45.125 }, 00:08:45.125 "claimed": false, 00:08:45.125 "zoned": false, 00:08:45.125 "supported_io_types": { 00:08:45.125 "read": true, 00:08:45.125 "write": true, 00:08:45.125 "unmap": false, 00:08:45.125 
"flush": false, 00:08:45.125 "reset": true, 00:08:45.125 "nvme_admin": false, 00:08:45.125 "nvme_io": false, 00:08:45.125 "nvme_io_md": false, 00:08:45.125 "write_zeroes": true, 00:08:45.125 "zcopy": false, 00:08:45.125 "get_zone_info": false, 00:08:45.125 "zone_management": false, 00:08:45.125 "zone_append": false, 00:08:45.125 "compare": false, 00:08:45.125 "compare_and_write": false, 00:08:45.125 "abort": false, 00:08:45.125 "seek_hole": false, 00:08:45.125 "seek_data": false, 00:08:45.125 "copy": false, 00:08:45.125 "nvme_iov_md": false 00:08:45.125 }, 00:08:45.125 "memory_domains": [ 00:08:45.125 { 00:08:45.125 "dma_device_id": "system", 00:08:45.125 "dma_device_type": 1 00:08:45.125 }, 00:08:45.125 { 00:08:45.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.125 "dma_device_type": 2 00:08:45.125 }, 00:08:45.125 { 00:08:45.125 "dma_device_id": "system", 00:08:45.125 "dma_device_type": 1 00:08:45.125 }, 00:08:45.125 { 00:08:45.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.125 "dma_device_type": 2 00:08:45.125 } 00:08:45.125 ], 00:08:45.125 "driver_specific": { 00:08:45.125 "raid": { 00:08:45.125 "uuid": "42d0e347-2615-4736-9618-8e4fb5d983f5", 00:08:45.125 "strip_size_kb": 0, 00:08:45.125 "state": "online", 00:08:45.125 "raid_level": "raid1", 00:08:45.125 "superblock": false, 00:08:45.125 "num_base_bdevs": 2, 00:08:45.125 "num_base_bdevs_discovered": 2, 00:08:45.125 "num_base_bdevs_operational": 2, 00:08:45.125 "base_bdevs_list": [ 00:08:45.125 { 00:08:45.125 "name": "BaseBdev1", 00:08:45.125 "uuid": "da14206b-514a-485c-904b-c7619c2a99d7", 00:08:45.125 "is_configured": true, 00:08:45.125 "data_offset": 0, 00:08:45.125 "data_size": 65536 00:08:45.125 }, 00:08:45.125 { 00:08:45.125 "name": "BaseBdev2", 00:08:45.125 "uuid": "7b1c2433-fedb-4ac0-aa2f-6a5a3fb7aff6", 00:08:45.125 "is_configured": true, 00:08:45.125 "data_offset": 0, 00:08:45.125 "data_size": 65536 00:08:45.125 } 00:08:45.125 ] 00:08:45.125 } 00:08:45.125 } 00:08:45.125 }' 00:08:45.125 
15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:45.125 BaseBdev2' 00:08:45.125 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.385 15:17:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.385 [2024-11-10 15:17:51.601786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=raid1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.385 "name": "Existed_Raid", 00:08:45.385 "uuid": "42d0e347-2615-4736-9618-8e4fb5d983f5", 00:08:45.385 "strip_size_kb": 0, 00:08:45.385 "state": "online", 00:08:45.385 "raid_level": "raid1", 00:08:45.385 "superblock": false, 00:08:45.385 "num_base_bdevs": 2, 00:08:45.385 "num_base_bdevs_discovered": 1, 00:08:45.385 "num_base_bdevs_operational": 1, 00:08:45.385 "base_bdevs_list": [ 00:08:45.385 { 00:08:45.385 "name": null, 00:08:45.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.385 "is_configured": false, 00:08:45.385 "data_offset": 0, 00:08:45.385 "data_size": 65536 00:08:45.385 }, 00:08:45.385 { 00:08:45.385 
"name": "BaseBdev2", 00:08:45.385 "uuid": "7b1c2433-fedb-4ac0-aa2f-6a5a3fb7aff6", 00:08:45.385 "is_configured": true, 00:08:45.385 "data_offset": 0, 00:08:45.385 "data_size": 65536 00:08:45.385 } 00:08:45.385 ] 00:08:45.385 }' 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.385 15:17:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.963 [2024-11-10 15:17:52.101564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.963 [2024-11-10 15:17:52.101731] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to 
offline 00:08:45.963 [2024-11-10 15:17:52.113400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.963 [2024-11-10 15:17:52.113461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.963 [2024-11-10 15:17:52.113483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 75370 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 75370 ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 75370 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@957 -- # uname 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75370 00:08:45.963 killing process with pid 75370 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75370' 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 75370 00:08:45.963 [2024-11-10 15:17:52.207745] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.963 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 75370 00:08:45.963 [2024-11-10 15:17:52.208790] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.238 00:08:46.238 real 0m3.899s 00:08:46.238 user 0m6.155s 00:08:46.238 sys 0m0.788s 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.238 ************************************ 00:08:46.238 END TEST raid_state_function_test 00:08:46.238 ************************************ 00:08:46.238 15:17:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:46.238 15:17:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:46.238 15:17:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:46.238 
15:17:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.238 ************************************ 00:08:46.238 START TEST raid_state_function_test_sb 00:08:46.238 ************************************ 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local 
raid_bdev_name=Existed_Raid 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75602 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75602' 00:08:46.238 Process raid pid: 75602 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75602 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 75602 ']' 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
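The `waitforlisten` step logged here blocks until the freshly launched `bdev_svc` process has bound its UNIX-domain RPC socket. A rough sketch of that pattern, assuming a bash-like shell; the function name, socket path, and retry/sleep values are illustrative, not the actual `autotest_common.sh` implementation:

```shell
# Poll until a process has created its UNIX-domain socket (e.g.
# /var/tmp/spdk.sock), then return 0; give up after N retries.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0   # -S: path exists and is a socket
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1
}
```

Once the socket exists, subsequent `rpc_cmd` calls in the test can safely target it.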
00:08:46.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.238 15:17:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.238 [2024-11-10 15:17:52.596175] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:46.238 [2024-11-10 15:17:52.596390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.497 [2024-11-10 15:17:52.731354] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.497 [2024-11-10 15:17:52.769220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.497 [2024-11-10 15:17:52.796012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.497 [2024-11-10 15:17:52.840282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.497 [2024-11-10 15:17:52.840322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.068 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:47.068 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:08:47.068 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.068 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.068 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.068 [2024-11-10 
15:17:53.423653] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.068 [2024-11-10 15:17:53.423713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.068 [2024-11-10 15:17:53.423729] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.068 [2024-11-10 15:17:53.423739] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.328 "name": "Existed_Raid", 00:08:47.328 "uuid": "32d44f02-2962-4c6a-a182-304257a08a83", 00:08:47.328 "strip_size_kb": 0, 00:08:47.328 "state": "configuring", 00:08:47.328 "raid_level": "raid1", 00:08:47.328 "superblock": true, 00:08:47.328 "num_base_bdevs": 2, 00:08:47.328 "num_base_bdevs_discovered": 0, 00:08:47.328 "num_base_bdevs_operational": 2, 00:08:47.328 "base_bdevs_list": [ 00:08:47.328 { 00:08:47.328 "name": "BaseBdev1", 00:08:47.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.328 "is_configured": false, 00:08:47.328 "data_offset": 0, 00:08:47.328 "data_size": 0 00:08:47.328 }, 00:08:47.328 { 00:08:47.328 "name": "BaseBdev2", 00:08:47.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.328 "is_configured": false, 00:08:47.328 "data_offset": 0, 00:08:47.328 "data_size": 0 00:08:47.328 } 00:08:47.328 ] 00:08:47.328 }' 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.328 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 [2024-11-10 15:17:53.851690] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.588 [2024-11-10 15:17:53.851795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 [2024-11-10 15:17:53.863732] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.588 [2024-11-10 15:17:53.863832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.588 [2024-11-10 15:17:53.863872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.588 [2024-11-10 15:17:53.863903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 [2024-11-10 15:17:53.884902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.588 BaseBdev1 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.588 
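The repeated `verify_raid_bdev_state Existed_Raid …` calls in this log boil down to comparing a few fields of the `bdev_raid_get_bdevs` JSON against expected values. A simplified sketch of that check, assuming `jq` is available; the function name and argument order are illustrative, not the real `bdev_raid.sh` helper, and the field names are taken from the Existed_Raid dumps above:

```shell
# Compare state, raid_level and operational base-bdev count from a raid
# bdev info JSON blob against expected values; exit status 0 on match.
verify_state() {
    local info=$1 expected_state=$2 expected_level=$3 expected_operational=$4
    [ "$(printf '%s' "$info" | jq -r .state)" = "$expected_state" ] &&
    [ "$(printf '%s' "$info" | jq -r .raid_level)" = "$expected_level" ] &&
    [ "$(printf '%s' "$info" | jq -r .num_base_bdevs_operational)" -eq "$expected_operational" ]
}
```

In the transcript, the same check runs first with `online raid1 0 2` and, after one base bdev is deleted, with `online raid1 0 1`, since raid1 keeps the array online with a single surviving mirror.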
15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 [ 00:08:47.588 { 00:08:47.588 "name": "BaseBdev1", 00:08:47.588 "aliases": [ 00:08:47.588 "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a" 00:08:47.588 ], 00:08:47.588 "product_name": "Malloc disk", 00:08:47.588 "block_size": 512, 00:08:47.588 "num_blocks": 65536, 00:08:47.588 "uuid": "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a", 00:08:47.588 "assigned_rate_limits": { 00:08:47.588 "rw_ios_per_sec": 0, 00:08:47.588 "rw_mbytes_per_sec": 0, 00:08:47.588 "r_mbytes_per_sec": 0, 00:08:47.588 "w_mbytes_per_sec": 0 
00:08:47.588 }, 00:08:47.588 "claimed": true, 00:08:47.588 "claim_type": "exclusive_write", 00:08:47.588 "zoned": false, 00:08:47.588 "supported_io_types": { 00:08:47.588 "read": true, 00:08:47.588 "write": true, 00:08:47.588 "unmap": true, 00:08:47.588 "flush": true, 00:08:47.588 "reset": true, 00:08:47.588 "nvme_admin": false, 00:08:47.588 "nvme_io": false, 00:08:47.588 "nvme_io_md": false, 00:08:47.588 "write_zeroes": true, 00:08:47.588 "zcopy": true, 00:08:47.588 "get_zone_info": false, 00:08:47.588 "zone_management": false, 00:08:47.588 "zone_append": false, 00:08:47.588 "compare": false, 00:08:47.588 "compare_and_write": false, 00:08:47.588 "abort": true, 00:08:47.588 "seek_hole": false, 00:08:47.588 "seek_data": false, 00:08:47.588 "copy": true, 00:08:47.588 "nvme_iov_md": false 00:08:47.588 }, 00:08:47.588 "memory_domains": [ 00:08:47.588 { 00:08:47.588 "dma_device_id": "system", 00:08:47.588 "dma_device_type": 1 00:08:47.588 }, 00:08:47.588 { 00:08:47.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.588 "dma_device_type": 2 00:08:47.588 } 00:08:47.588 ], 00:08:47.588 "driver_specific": {} 00:08:47.588 } 00:08:47.588 ] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.588 
15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.588 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.848 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.848 "name": "Existed_Raid", 00:08:47.848 "uuid": "3f6bf0cd-1164-443a-af2e-ee2ed5f70b1f", 00:08:47.848 "strip_size_kb": 0, 00:08:47.848 "state": "configuring", 00:08:47.848 "raid_level": "raid1", 00:08:47.848 "superblock": true, 00:08:47.848 "num_base_bdevs": 2, 00:08:47.848 "num_base_bdevs_discovered": 1, 00:08:47.848 "num_base_bdevs_operational": 2, 00:08:47.848 "base_bdevs_list": [ 00:08:47.848 { 00:08:47.848 "name": "BaseBdev1", 00:08:47.848 "uuid": "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a", 00:08:47.848 "is_configured": true, 00:08:47.848 "data_offset": 2048, 00:08:47.848 "data_size": 63488 00:08:47.848 }, 00:08:47.848 { 00:08:47.848 "name": "BaseBdev2", 00:08:47.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.848 
"is_configured": false, 00:08:47.848 "data_offset": 0, 00:08:47.848 "data_size": 0 00:08:47.848 } 00:08:47.848 ] 00:08:47.848 }' 00:08:47.848 15:17:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.848 15:17:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 [2024-11-10 15:17:54.353104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.108 [2024-11-10 15:17:54.353240] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 [2024-11-10 15:17:54.365142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.108 [2024-11-10 15:17:54.367050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.108 [2024-11-10 15:17:54.367144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.108 15:17:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.108 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.108 15:17:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.108 "name": "Existed_Raid", 00:08:48.108 "uuid": "de5242f8-3198-46d1-a5fa-5f059a06ce19", 00:08:48.108 "strip_size_kb": 0, 00:08:48.108 "state": "configuring", 00:08:48.108 "raid_level": "raid1", 00:08:48.108 "superblock": true, 00:08:48.108 "num_base_bdevs": 2, 00:08:48.108 "num_base_bdevs_discovered": 1, 00:08:48.108 "num_base_bdevs_operational": 2, 00:08:48.108 "base_bdevs_list": [ 00:08:48.108 { 00:08:48.108 "name": "BaseBdev1", 00:08:48.109 "uuid": "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a", 00:08:48.109 "is_configured": true, 00:08:48.109 "data_offset": 2048, 00:08:48.109 "data_size": 63488 00:08:48.109 }, 00:08:48.109 { 00:08:48.109 "name": "BaseBdev2", 00:08:48.109 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.109 "is_configured": false, 00:08:48.109 "data_offset": 0, 00:08:48.109 "data_size": 0 00:08:48.109 } 00:08:48.109 ] 00:08:48.109 }' 00:08:48.109 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.109 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 [2024-11-10 15:17:54.780594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.678 [2024-11-10 15:17:54.780810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:48.678 [2024-11-10 15:17:54.780831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.678 BaseBdev2 00:08:48.678 [2024-11-10 15:17:54.781165] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:48.678 [2024-11-10 15:17:54.781341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:48.678 [2024-11-10 15:17:54.781361] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:08:48.678 [2024-11-10 15:17:54.781509] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.678 
15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 [ 00:08:48.678 { 00:08:48.678 "name": "BaseBdev2", 00:08:48.678 "aliases": [ 00:08:48.678 "94d93525-de72-42ea-bcaf-7140cebaa658" 00:08:48.678 ], 00:08:48.678 "product_name": "Malloc disk", 00:08:48.678 "block_size": 512, 00:08:48.678 "num_blocks": 65536, 00:08:48.678 "uuid": "94d93525-de72-42ea-bcaf-7140cebaa658", 00:08:48.678 "assigned_rate_limits": { 00:08:48.678 "rw_ios_per_sec": 0, 00:08:48.678 "rw_mbytes_per_sec": 0, 00:08:48.678 "r_mbytes_per_sec": 0, 00:08:48.678 "w_mbytes_per_sec": 0 00:08:48.678 }, 00:08:48.678 "claimed": true, 00:08:48.678 "claim_type": "exclusive_write", 00:08:48.678 "zoned": false, 00:08:48.678 "supported_io_types": { 00:08:48.678 "read": true, 00:08:48.678 "write": true, 00:08:48.678 "unmap": true, 00:08:48.678 "flush": true, 00:08:48.678 "reset": true, 00:08:48.678 "nvme_admin": false, 00:08:48.678 "nvme_io": false, 00:08:48.678 "nvme_io_md": false, 00:08:48.678 "write_zeroes": true, 00:08:48.678 "zcopy": true, 00:08:48.678 "get_zone_info": false, 00:08:48.678 "zone_management": false, 00:08:48.678 "zone_append": false, 00:08:48.678 "compare": false, 00:08:48.678 "compare_and_write": false, 00:08:48.678 "abort": true, 00:08:48.678 "seek_hole": false, 00:08:48.678 "seek_data": false, 00:08:48.678 "copy": true, 00:08:48.678 "nvme_iov_md": false 00:08:48.678 }, 00:08:48.678 "memory_domains": [ 00:08:48.678 { 00:08:48.678 "dma_device_id": "system", 00:08:48.678 "dma_device_type": 1 00:08:48.678 }, 00:08:48.678 { 00:08:48.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.678 "dma_device_type": 2 00:08:48.678 } 00:08:48.678 ], 00:08:48.678 "driver_specific": {} 00:08:48.678 } 00:08:48.678 ] 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:08:48.678 15:17:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.678 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.678 15:17:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.678 "name": "Existed_Raid", 00:08:48.678 "uuid": "de5242f8-3198-46d1-a5fa-5f059a06ce19", 00:08:48.678 "strip_size_kb": 0, 00:08:48.678 "state": "online", 00:08:48.679 "raid_level": "raid1", 00:08:48.679 "superblock": true, 00:08:48.679 "num_base_bdevs": 2, 00:08:48.679 "num_base_bdevs_discovered": 2, 00:08:48.679 "num_base_bdevs_operational": 2, 00:08:48.679 "base_bdevs_list": [ 00:08:48.679 { 00:08:48.679 "name": "BaseBdev1", 00:08:48.679 "uuid": "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a", 00:08:48.679 "is_configured": true, 00:08:48.679 "data_offset": 2048, 00:08:48.679 "data_size": 63488 00:08:48.679 }, 00:08:48.679 { 00:08:48.679 "name": "BaseBdev2", 00:08:48.679 "uuid": "94d93525-de72-42ea-bcaf-7140cebaa658", 00:08:48.679 "is_configured": true, 00:08:48.679 "data_offset": 2048, 00:08:48.679 "data_size": 63488 00:08:48.679 } 00:08:48.679 ] 00:08:48.679 }' 00:08:48.679 15:17:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.679 15:17:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.939 15:17:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.939 [2024-11-10 15:17:55.197007] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.939 "name": "Existed_Raid", 00:08:48.939 "aliases": [ 00:08:48.939 "de5242f8-3198-46d1-a5fa-5f059a06ce19" 00:08:48.939 ], 00:08:48.939 "product_name": "Raid Volume", 00:08:48.939 "block_size": 512, 00:08:48.939 "num_blocks": 63488, 00:08:48.939 "uuid": "de5242f8-3198-46d1-a5fa-5f059a06ce19", 00:08:48.939 "assigned_rate_limits": { 00:08:48.939 "rw_ios_per_sec": 0, 00:08:48.939 "rw_mbytes_per_sec": 0, 00:08:48.939 "r_mbytes_per_sec": 0, 00:08:48.939 "w_mbytes_per_sec": 0 00:08:48.939 }, 00:08:48.939 "claimed": false, 00:08:48.939 "zoned": false, 00:08:48.939 "supported_io_types": { 00:08:48.939 "read": true, 00:08:48.939 "write": true, 00:08:48.939 "unmap": false, 00:08:48.939 "flush": false, 00:08:48.939 "reset": true, 00:08:48.939 "nvme_admin": false, 00:08:48.939 "nvme_io": false, 00:08:48.939 "nvme_io_md": false, 00:08:48.939 "write_zeroes": true, 00:08:48.939 "zcopy": false, 00:08:48.939 "get_zone_info": false, 00:08:48.939 "zone_management": false, 00:08:48.939 "zone_append": false, 00:08:48.939 "compare": false, 00:08:48.939 "compare_and_write": false, 00:08:48.939 "abort": false, 00:08:48.939 "seek_hole": false, 00:08:48.939 "seek_data": false, 00:08:48.939 "copy": false, 00:08:48.939 "nvme_iov_md": false 00:08:48.939 }, 00:08:48.939 "memory_domains": [ 00:08:48.939 { 00:08:48.939 "dma_device_id": 
"system", 00:08:48.939 "dma_device_type": 1 00:08:48.939 }, 00:08:48.939 { 00:08:48.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.939 "dma_device_type": 2 00:08:48.939 }, 00:08:48.939 { 00:08:48.939 "dma_device_id": "system", 00:08:48.939 "dma_device_type": 1 00:08:48.939 }, 00:08:48.939 { 00:08:48.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.939 "dma_device_type": 2 00:08:48.939 } 00:08:48.939 ], 00:08:48.939 "driver_specific": { 00:08:48.939 "raid": { 00:08:48.939 "uuid": "de5242f8-3198-46d1-a5fa-5f059a06ce19", 00:08:48.939 "strip_size_kb": 0, 00:08:48.939 "state": "online", 00:08:48.939 "raid_level": "raid1", 00:08:48.939 "superblock": true, 00:08:48.939 "num_base_bdevs": 2, 00:08:48.939 "num_base_bdevs_discovered": 2, 00:08:48.939 "num_base_bdevs_operational": 2, 00:08:48.939 "base_bdevs_list": [ 00:08:48.939 { 00:08:48.939 "name": "BaseBdev1", 00:08:48.939 "uuid": "bb89e459-8a23-4ff8-ada6-d37bbb3e8d5a", 00:08:48.939 "is_configured": true, 00:08:48.939 "data_offset": 2048, 00:08:48.939 "data_size": 63488 00:08:48.939 }, 00:08:48.939 { 00:08:48.939 "name": "BaseBdev2", 00:08:48.939 "uuid": "94d93525-de72-42ea-bcaf-7140cebaa658", 00:08:48.939 "is_configured": true, 00:08:48.939 "data_offset": 2048, 00:08:48.939 "data_size": 63488 00:08:48.939 } 00:08:48.939 ] 00:08:48.939 } 00:08:48.939 } 00:08:48.939 }' 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:48.939 BaseBdev2' 00:08:48.939 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 
00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.199 [2024-11-10 15:17:55.416849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.199 "name": "Existed_Raid", 00:08:49.199 "uuid": "de5242f8-3198-46d1-a5fa-5f059a06ce19", 00:08:49.199 "strip_size_kb": 0, 00:08:49.199 "state": "online", 00:08:49.199 "raid_level": "raid1", 00:08:49.199 "superblock": true, 00:08:49.199 "num_base_bdevs": 2, 00:08:49.199 "num_base_bdevs_discovered": 1, 00:08:49.199 "num_base_bdevs_operational": 1, 00:08:49.199 "base_bdevs_list": [ 00:08:49.199 { 00:08:49.199 "name": null, 00:08:49.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.199 "is_configured": false, 00:08:49.199 "data_offset": 0, 00:08:49.199 "data_size": 63488 00:08:49.199 }, 00:08:49.199 { 00:08:49.199 "name": "BaseBdev2", 00:08:49.199 "uuid": "94d93525-de72-42ea-bcaf-7140cebaa658", 00:08:49.199 "is_configured": true, 00:08:49.199 "data_offset": 2048, 00:08:49.199 "data_size": 63488 00:08:49.199 } 00:08:49.199 ] 00:08:49.199 }' 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.199 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.768 [2024-11-10 15:17:55.908538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.768 [2024-11-10 15:17:55.908703] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.768 [2024-11-10 15:17:55.920431] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.768 [2024-11-10 15:17:55.920583] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.768 [2024-11-10 15:17:55.920630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:08:49.768 15:17:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75602 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 75602 ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 75602 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75602 00:08:49.768 killing process with pid 75602 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:49.768 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75602' 00:08:49.769 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 75602 00:08:49.769 [2024-11-10 15:17:56.000554] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.769 15:17:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 75602 00:08:49.769 [2024-11-10 15:17:56.001568] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.028 15:17:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:50.028 00:08:50.028 real 0m3.714s 00:08:50.028 user 0m5.837s 00:08:50.028 sys 0m0.746s 00:08:50.028 15:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:50.028 15:17:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.028 ************************************ 00:08:50.028 END TEST raid_state_function_test_sb 00:08:50.028 ************************************ 00:08:50.028 15:17:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:50.028 15:17:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:50.028 15:17:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:50.028 15:17:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.028 ************************************ 00:08:50.028 START TEST raid_superblock_test 00:08:50.028 ************************************ 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75842 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75842 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 75842 ']' 00:08:50.028 15:17:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:50.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:50.028 15:17:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.028 [2024-11-10 15:17:56.374474] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:50.028 [2024-11-10 15:17:56.374677] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75842 ] 00:08:50.288 [2024-11-10 15:17:56.505586] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:50.288 [2024-11-10 15:17:56.547328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.288 [2024-11-10 15:17:56.571680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.288 [2024-11-10 15:17:56.614937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.288 [2024-11-10 15:17:56.614976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.858 malloc1 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.858 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.858 [2024-11-10 15:17:57.218961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:50.858 [2024-11-10 15:17:57.219115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.858 [2024-11-10 15:17:57.219203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:50.858 [2024-11-10 15:17:57.219252] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.118 [2024-11-10 15:17:57.221365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.118 [2024-11-10 15:17:57.221458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.118 pt1 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.118 malloc2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.118 [2024-11-10 15:17:57.251645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.118 [2024-11-10 15:17:57.251701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.118 [2024-11-10 15:17:57.251721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.118 [2024-11-10 15:17:57.251731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.118 [2024-11-10 15:17:57.253770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.118 [2024-11-10 15:17:57.253809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.118 pt2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.118 [2024-11-10 15:17:57.263676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.118 [2024-11-10 15:17:57.265508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.118 [2024-11-10 15:17:57.265661] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:08:51.118 [2024-11-10 15:17:57.265680] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.118 [2024-11-10 15:17:57.265938] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:08:51.118 [2024-11-10 15:17:57.266094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:08:51.118 [2024-11-10 15:17:57.266108] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:08:51.118 [2024-11-10 15:17:57.266235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.118 "name": "raid_bdev1", 00:08:51.118 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:51.118 "strip_size_kb": 0, 00:08:51.118 "state": "online", 00:08:51.118 "raid_level": "raid1", 00:08:51.118 "superblock": true, 00:08:51.118 "num_base_bdevs": 2, 00:08:51.118 "num_base_bdevs_discovered": 2, 00:08:51.118 "num_base_bdevs_operational": 2, 00:08:51.118 "base_bdevs_list": [ 00:08:51.118 { 00:08:51.118 "name": "pt1", 00:08:51.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.118 "is_configured": true, 00:08:51.118 "data_offset": 2048, 00:08:51.118 "data_size": 63488 00:08:51.118 }, 00:08:51.118 { 00:08:51.118 "name": "pt2", 00:08:51.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.118 "is_configured": true, 00:08:51.118 
"data_offset": 2048, 00:08:51.118 "data_size": 63488 00:08:51.118 } 00:08:51.118 ] 00:08:51.118 }' 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.118 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.378 [2024-11-10 15:17:57.680059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.378 "name": "raid_bdev1", 00:08:51.378 "aliases": [ 00:08:51.378 "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79" 00:08:51.378 ], 00:08:51.378 "product_name": "Raid Volume", 00:08:51.378 "block_size": 512, 00:08:51.378 "num_blocks": 63488, 00:08:51.378 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 
00:08:51.378 "assigned_rate_limits": { 00:08:51.378 "rw_ios_per_sec": 0, 00:08:51.378 "rw_mbytes_per_sec": 0, 00:08:51.378 "r_mbytes_per_sec": 0, 00:08:51.378 "w_mbytes_per_sec": 0 00:08:51.378 }, 00:08:51.378 "claimed": false, 00:08:51.378 "zoned": false, 00:08:51.378 "supported_io_types": { 00:08:51.378 "read": true, 00:08:51.378 "write": true, 00:08:51.378 "unmap": false, 00:08:51.378 "flush": false, 00:08:51.378 "reset": true, 00:08:51.378 "nvme_admin": false, 00:08:51.378 "nvme_io": false, 00:08:51.378 "nvme_io_md": false, 00:08:51.378 "write_zeroes": true, 00:08:51.378 "zcopy": false, 00:08:51.378 "get_zone_info": false, 00:08:51.378 "zone_management": false, 00:08:51.378 "zone_append": false, 00:08:51.378 "compare": false, 00:08:51.378 "compare_and_write": false, 00:08:51.378 "abort": false, 00:08:51.378 "seek_hole": false, 00:08:51.378 "seek_data": false, 00:08:51.378 "copy": false, 00:08:51.378 "nvme_iov_md": false 00:08:51.378 }, 00:08:51.378 "memory_domains": [ 00:08:51.378 { 00:08:51.378 "dma_device_id": "system", 00:08:51.378 "dma_device_type": 1 00:08:51.378 }, 00:08:51.378 { 00:08:51.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.378 "dma_device_type": 2 00:08:51.378 }, 00:08:51.378 { 00:08:51.378 "dma_device_id": "system", 00:08:51.378 "dma_device_type": 1 00:08:51.378 }, 00:08:51.378 { 00:08:51.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.378 "dma_device_type": 2 00:08:51.378 } 00:08:51.378 ], 00:08:51.378 "driver_specific": { 00:08:51.378 "raid": { 00:08:51.378 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:51.378 "strip_size_kb": 0, 00:08:51.378 "state": "online", 00:08:51.378 "raid_level": "raid1", 00:08:51.378 "superblock": true, 00:08:51.378 "num_base_bdevs": 2, 00:08:51.378 "num_base_bdevs_discovered": 2, 00:08:51.378 "num_base_bdevs_operational": 2, 00:08:51.378 "base_bdevs_list": [ 00:08:51.378 { 00:08:51.378 "name": "pt1", 00:08:51.378 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.378 "is_configured": 
true, 00:08:51.378 "data_offset": 2048, 00:08:51.378 "data_size": 63488 00:08:51.378 }, 00:08:51.378 { 00:08:51.378 "name": "pt2", 00:08:51.378 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.378 "is_configured": true, 00:08:51.378 "data_offset": 2048, 00:08:51.378 "data_size": 63488 00:08:51.378 } 00:08:51.378 ] 00:08:51.378 } 00:08:51.378 } 00:08:51.378 }' 00:08:51.378 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:51.638 pt2' 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.638 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:51.639 [2024-11-10 15:17:57.908074] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 ']' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.639 [2024-11-10 
15:17:57.955825] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.639 [2024-11-10 15:17:57.955851] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.639 [2024-11-10 15:17:57.955945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.639 [2024-11-10 15:17:57.956029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.639 [2024-11-10 15:17:57.956043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.639 15:17:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.899 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:51.900 
15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 [2024-11-10 15:17:58.091908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:51.900 [2024-11-10 15:17:58.093972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:51.900 [2024-11-10 15:17:58.094135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:51.900 [2024-11-10 15:17:58.094249] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:51.900 [2024-11-10 15:17:58.094325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:51.900 [2024-11-10 15:17:58.094343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:08:51.900 request: 00:08:51.900 { 00:08:51.900 "name": "raid_bdev1", 00:08:51.900 "raid_level": "raid1", 00:08:51.900 "base_bdevs": [ 00:08:51.900 "malloc1", 00:08:51.900 "malloc2" 00:08:51.900 ], 00:08:51.900 "superblock": false, 00:08:51.900 "method": "bdev_raid_create", 00:08:51.900 "req_id": 1 00:08:51.900 } 00:08:51.900 Got JSON-RPC error response 00:08:51.900 response: 00:08:51.900 { 00:08:51.900 "code": -17, 00:08:51.900 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:51.900 } 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:51.900 15:17:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 [2024-11-10 15:17:58.159894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.900 [2024-11-10 15:17:58.159995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.900 [2024-11-10 15:17:58.160042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:51.900 [2024-11-10 15:17:58.160083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.900 [2024-11-10 15:17:58.162238] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.900 [2024-11-10 15:17:58.162336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.900 [2024-11-10 15:17:58.162432] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:51.900 [2024-11-10 15:17:58.162518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.900 pt1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.900 "name": "raid_bdev1", 00:08:51.900 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:51.900 "strip_size_kb": 0, 00:08:51.900 "state": "configuring", 00:08:51.900 "raid_level": "raid1", 00:08:51.900 "superblock": true, 00:08:51.900 "num_base_bdevs": 2, 00:08:51.900 "num_base_bdevs_discovered": 1, 00:08:51.900 "num_base_bdevs_operational": 2, 00:08:51.900 "base_bdevs_list": [ 00:08:51.900 { 00:08:51.900 "name": "pt1", 00:08:51.900 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.900 "is_configured": true, 00:08:51.900 "data_offset": 2048, 00:08:51.900 "data_size": 63488 00:08:51.900 }, 00:08:51.900 { 00:08:51.900 "name": null, 00:08:51.900 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.900 "is_configured": false, 00:08:51.900 "data_offset": 2048, 00:08:51.900 "data_size": 63488 00:08:51.900 } 00:08:51.900 ] 00:08:51.900 }' 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.900 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.478 [2024-11-10 15:17:58.595993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.478 [2024-11-10 15:17:58.596117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.478 [2024-11-10 15:17:58.596144] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:52.478 [2024-11-10 15:17:58.596158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.478 [2024-11-10 15:17:58.596572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.478 [2024-11-10 15:17:58.596594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.478 [2024-11-10 15:17:58.596658] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.478 [2024-11-10 15:17:58.596681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.478 [2024-11-10 15:17:58.596767] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:52.478 [2024-11-10 15:17:58.596779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.478 [2024-11-10 15:17:58.597007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.478 [2024-11-10 15:17:58.597169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:52.478 [2024-11-10 15:17:58.597181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:52.478 [2024-11-10 15:17:58.597296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.478 pt2 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.478 15:17:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.478 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.479 "name": "raid_bdev1", 00:08:52.479 "uuid": 
"f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:52.479 "strip_size_kb": 0, 00:08:52.479 "state": "online", 00:08:52.479 "raid_level": "raid1", 00:08:52.479 "superblock": true, 00:08:52.479 "num_base_bdevs": 2, 00:08:52.479 "num_base_bdevs_discovered": 2, 00:08:52.479 "num_base_bdevs_operational": 2, 00:08:52.479 "base_bdevs_list": [ 00:08:52.479 { 00:08:52.479 "name": "pt1", 00:08:52.479 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.479 "is_configured": true, 00:08:52.479 "data_offset": 2048, 00:08:52.479 "data_size": 63488 00:08:52.479 }, 00:08:52.479 { 00:08:52.479 "name": "pt2", 00:08:52.479 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.479 "is_configured": true, 00:08:52.479 "data_offset": 2048, 00:08:52.479 "data_size": 63488 00:08:52.479 } 00:08:52.479 ] 00:08:52.479 }' 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.479 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.746 15:17:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.746 
15:17:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.746 [2024-11-10 15:17:58.992367] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.746 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.746 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.746 "name": "raid_bdev1", 00:08:52.746 "aliases": [ 00:08:52.746 "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79" 00:08:52.746 ], 00:08:52.746 "product_name": "Raid Volume", 00:08:52.746 "block_size": 512, 00:08:52.746 "num_blocks": 63488, 00:08:52.746 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:52.746 "assigned_rate_limits": { 00:08:52.746 "rw_ios_per_sec": 0, 00:08:52.746 "rw_mbytes_per_sec": 0, 00:08:52.746 "r_mbytes_per_sec": 0, 00:08:52.746 "w_mbytes_per_sec": 0 00:08:52.746 }, 00:08:52.746 "claimed": false, 00:08:52.746 "zoned": false, 00:08:52.746 "supported_io_types": { 00:08:52.746 "read": true, 00:08:52.746 "write": true, 00:08:52.746 "unmap": false, 00:08:52.746 "flush": false, 00:08:52.746 "reset": true, 00:08:52.746 "nvme_admin": false, 00:08:52.746 "nvme_io": false, 00:08:52.746 "nvme_io_md": false, 00:08:52.746 "write_zeroes": true, 00:08:52.746 "zcopy": false, 00:08:52.746 "get_zone_info": false, 00:08:52.746 "zone_management": false, 00:08:52.746 "zone_append": false, 00:08:52.746 "compare": false, 00:08:52.746 "compare_and_write": false, 00:08:52.746 "abort": false, 00:08:52.746 "seek_hole": false, 00:08:52.746 "seek_data": false, 00:08:52.746 "copy": false, 00:08:52.746 "nvme_iov_md": false 00:08:52.746 }, 00:08:52.746 "memory_domains": [ 00:08:52.746 { 00:08:52.746 "dma_device_id": "system", 00:08:52.746 "dma_device_type": 1 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.746 "dma_device_type": 2 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "system", 00:08:52.746 
"dma_device_type": 1 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.746 "dma_device_type": 2 00:08:52.746 } 00:08:52.746 ], 00:08:52.746 "driver_specific": { 00:08:52.746 "raid": { 00:08:52.746 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:52.746 "strip_size_kb": 0, 00:08:52.746 "state": "online", 00:08:52.746 "raid_level": "raid1", 00:08:52.746 "superblock": true, 00:08:52.746 "num_base_bdevs": 2, 00:08:52.746 "num_base_bdevs_discovered": 2, 00:08:52.746 "num_base_bdevs_operational": 2, 00:08:52.746 "base_bdevs_list": [ 00:08:52.746 { 00:08:52.746 "name": "pt1", 00:08:52.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.746 "is_configured": true, 00:08:52.746 "data_offset": 2048, 00:08:52.746 "data_size": 63488 00:08:52.746 }, 00:08:52.746 { 00:08:52.746 "name": "pt2", 00:08:52.746 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.746 "is_configured": true, 00:08:52.746 "data_offset": 2048, 00:08:52.746 "data_size": 63488 00:08:52.746 } 00:08:52.746 ] 00:08:52.746 } 00:08:52.746 } 00:08:52.746 }' 00:08:52.747 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.747 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.747 pt2' 00:08:52.747 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.006 [2024-11-10 15:17:59.204436] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 '!=' f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 ']' 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.006 [2024-11-10 15:17:59.248219] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.006 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.007 
15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.007 "name": "raid_bdev1", 00:08:53.007 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:53.007 "strip_size_kb": 0, 00:08:53.007 "state": "online", 00:08:53.007 "raid_level": "raid1", 00:08:53.007 "superblock": true, 00:08:53.007 "num_base_bdevs": 2, 00:08:53.007 "num_base_bdevs_discovered": 1, 00:08:53.007 "num_base_bdevs_operational": 1, 00:08:53.007 "base_bdevs_list": [ 00:08:53.007 { 00:08:53.007 "name": null, 00:08:53.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.007 "is_configured": false, 00:08:53.007 "data_offset": 0, 00:08:53.007 "data_size": 63488 00:08:53.007 }, 00:08:53.007 { 00:08:53.007 "name": "pt2", 00:08:53.007 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.007 "is_configured": true, 00:08:53.007 "data_offset": 2048, 00:08:53.007 "data_size": 63488 00:08:53.007 } 00:08:53.007 ] 00:08:53.007 }' 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.007 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.576 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:08:53.576 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.576 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.576 [2024-11-10 15:17:59.640285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.576 [2024-11-10 15:17:59.640363] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.576 [2024-11-10 15:17:59.640458] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.577 [2024-11-10 15:17:59.640522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.577 [2024-11-10 15:17:59.640586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.577 [2024-11-10 15:17:59.700307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.577 [2024-11-10 15:17:59.700366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.577 [2024-11-10 15:17:59.700399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:53.577 [2024-11-10 15:17:59.700411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.577 [2024-11-10 15:17:59.702589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.577 pt2 00:08:53.577 [2024-11-10 15:17:59.702679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.577 [2024-11-10 15:17:59.702761] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.577 [2024-11-10 15:17:59.702801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.577 [2024-11-10 15:17:59.702879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:53.577 [2024-11-10 15:17:59.702891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.577 [2024-11-10 15:17:59.703143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:08:53.577 [2024-11-10 15:17:59.703284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:53.577 [2024-11-10 15:17:59.703295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:53.577 [2024-11-10 15:17:59.703409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.577 15:17:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.577 "name": "raid_bdev1", 00:08:53.577 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:53.577 "strip_size_kb": 0, 00:08:53.577 "state": "online", 00:08:53.577 "raid_level": "raid1", 00:08:53.577 "superblock": true, 00:08:53.577 "num_base_bdevs": 2, 00:08:53.577 "num_base_bdevs_discovered": 1, 00:08:53.577 "num_base_bdevs_operational": 1, 00:08:53.577 "base_bdevs_list": [ 00:08:53.577 { 00:08:53.577 "name": null, 00:08:53.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.577 "is_configured": false, 00:08:53.577 "data_offset": 2048, 00:08:53.577 "data_size": 63488 00:08:53.577 }, 00:08:53.577 { 00:08:53.577 "name": "pt2", 00:08:53.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.577 "is_configured": true, 00:08:53.577 "data_offset": 2048, 00:08:53.577 "data_size": 63488 00:08:53.577 } 00:08:53.577 ] 00:08:53.577 }' 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.577 15:17:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.837 [2024-11-10 15:18:00.148504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:53.837 [2024-11-10 15:18:00.148614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:53.837 [2024-11-10 15:18:00.148734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:53.837 [2024-11-10 15:18:00.148819] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:53.837 [2024-11-10 15:18:00.148885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.837 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.097 
15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.097 [2024-11-10 15:18:00.208464] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.097 [2024-11-10 15:18:00.208533] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.097 [2024-11-10 15:18:00.208558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:54.097 [2024-11-10 15:18:00.208570] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.097 [2024-11-10 15:18:00.210749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.097 pt1 00:08:54.097 [2024-11-10 15:18:00.210840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.097 [2024-11-10 15:18:00.210952] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.097 [2024-11-10 15:18:00.210987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.097 [2024-11-10 15:18:00.211114] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:54.097 [2024-11-10 15:18:00.211127] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.097 [2024-11-10 15:18:00.211148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:08:54.097 [2024-11-10 15:18:00.211187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.097 [2024-11-10 15:18:00.211291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:08:54.097 [2024-11-10 15:18:00.211300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 
00:08:54.097 [2024-11-10 15:18:00.211539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:54.097 [2024-11-10 15:18:00.211668] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:08:54.097 [2024-11-10 15:18:00.211684] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:08:54.097 [2024-11-10 15:18:00.211806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.097 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.097 "name": "raid_bdev1", 00:08:54.097 "uuid": "f6cb16c2-78ae-40a2-aa8f-0eba0a302a79", 00:08:54.097 "strip_size_kb": 0, 00:08:54.097 "state": "online", 00:08:54.098 "raid_level": "raid1", 00:08:54.098 "superblock": true, 00:08:54.098 "num_base_bdevs": 2, 00:08:54.098 "num_base_bdevs_discovered": 1, 00:08:54.098 "num_base_bdevs_operational": 1, 00:08:54.098 "base_bdevs_list": [ 00:08:54.098 { 00:08:54.098 "name": null, 00:08:54.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.098 "is_configured": false, 00:08:54.098 "data_offset": 2048, 00:08:54.098 "data_size": 63488 00:08:54.098 }, 00:08:54.098 { 00:08:54.098 "name": "pt2", 00:08:54.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.098 "is_configured": true, 00:08:54.098 "data_offset": 2048, 00:08:54.098 "data_size": 63488 00:08:54.098 } 00:08:54.098 ] 00:08:54.098 }' 00:08:54.098 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.098 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.357 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:54.357 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:54.357 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.358 [2024-11-10 15:18:00.640851] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 '!=' f6cb16c2-78ae-40a2-aa8f-0eba0a302a79 ']' 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75842 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 75842 ']' 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 75842 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75842 00:08:54.358 killing process with pid 75842 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:54.358 15:18:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75842' 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 75842 00:08:54.358 [2024-11-10 15:18:00.706421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.358 [2024-11-10 15:18:00.706540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.358 [2024-11-10 15:18:00.706593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.358 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 75842 00:08:54.358 [2024-11-10 15:18:00.706606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:08:54.618 [2024-11-10 15:18:00.730208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.618 15:18:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:54.618 00:08:54.618 real 0m4.660s 00:08:54.618 user 0m7.585s 00:08:54.618 sys 0m0.981s 00:08:54.618 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.618 15:18:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.618 ************************************ 00:08:54.618 END TEST raid_superblock_test 00:08:54.618 ************************************ 00:08:54.878 15:18:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:54.878 15:18:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:54.878 15:18:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.878 15:18:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.878 ************************************ 00:08:54.878 START TEST raid_read_error_test 00:08:54.878 ************************************ 
00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 read 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:54.878 
15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yQPMeFuWMT 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76161 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76161 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 76161 ']' 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.878 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.878 [2024-11-10 15:18:01.118445] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:08:54.878 [2024-11-10 15:18:01.118563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:08:55.137 [2024-11-10 15:18:01.251363] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:55.137 [2024-11-10 15:18:01.289329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.137 [2024-11-10 15:18:01.314782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.137 [2024-11-10 15:18:01.358685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.137 [2024-11-10 15:18:01.358729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.705 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:55.705 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 BaseBdev1_malloc 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 true 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 [2024-11-10 15:18:01.978788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:55.706 [2024-11-10 15:18:01.978860] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.706 [2024-11-10 15:18:01.978879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:55.706 [2024-11-10 15:18:01.978901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.706 [2024-11-10 15:18:01.981002] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.706 [2024-11-10 15:18:01.981060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:55.706 BaseBdev1 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 BaseBdev2_malloc 00:08:55.706 15:18:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 true 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 [2024-11-10 15:18:02.019660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:55.706 [2024-11-10 15:18:02.019717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.706 [2024-11-10 15:18:02.019751] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:55.706 [2024-11-10 15:18:02.019764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.706 [2024-11-10 15:18:02.021819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.706 [2024-11-10 15:18:02.021866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:55.706 BaseBdev2 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:02 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 [2024-11-10 15:18:02.031676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.706 [2024-11-10 15:18:02.033578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.706 [2024-11-10 15:18:02.033761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:55.706 [2024-11-10 15:18:02.033783] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:55.706 [2024-11-10 15:18:02.034059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:55.706 [2024-11-10 15:18:02.034225] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:55.706 [2024-11-10 15:18:02.034237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:55.706 [2024-11-10 15:18:02.034370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.706 15:18:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.966 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.966 "name": "raid_bdev1", 00:08:55.966 "uuid": "85dc5735-a2e1-449d-81fe-34bda0274560", 00:08:55.966 "strip_size_kb": 0, 00:08:55.966 "state": "online", 00:08:55.966 "raid_level": "raid1", 00:08:55.966 "superblock": true, 00:08:55.966 "num_base_bdevs": 2, 00:08:55.966 "num_base_bdevs_discovered": 2, 00:08:55.966 "num_base_bdevs_operational": 2, 00:08:55.966 "base_bdevs_list": [ 00:08:55.966 { 00:08:55.966 "name": "BaseBdev1", 00:08:55.966 "uuid": "c9741f75-591b-5ec8-a53c-2e745b22453f", 00:08:55.966 "is_configured": true, 00:08:55.966 "data_offset": 2048, 00:08:55.966 "data_size": 63488 00:08:55.966 }, 00:08:55.966 { 00:08:55.966 "name": "BaseBdev2", 00:08:55.966 "uuid": "970b221a-7f24-5f1b-afaf-273293cc8e1c", 00:08:55.966 "is_configured": true, 00:08:55.966 "data_offset": 2048, 00:08:55.966 "data_size": 63488 00:08:55.966 } 00:08:55.966 ] 00:08:55.966 }' 00:08:55.966 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.966 15:18:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:56.225 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:56.225 15:18:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:56.225 [2024-11-10 15:18:02.536229] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.165 "name": "raid_bdev1", 00:08:57.165 "uuid": "85dc5735-a2e1-449d-81fe-34bda0274560", 00:08:57.165 "strip_size_kb": 0, 00:08:57.165 "state": "online", 00:08:57.165 "raid_level": "raid1", 00:08:57.165 "superblock": true, 00:08:57.165 "num_base_bdevs": 2, 00:08:57.165 "num_base_bdevs_discovered": 2, 00:08:57.165 "num_base_bdevs_operational": 2, 00:08:57.165 "base_bdevs_list": [ 00:08:57.165 { 00:08:57.165 "name": "BaseBdev1", 00:08:57.165 "uuid": "c9741f75-591b-5ec8-a53c-2e745b22453f", 00:08:57.165 "is_configured": true, 00:08:57.165 "data_offset": 2048, 00:08:57.165 "data_size": 63488 00:08:57.165 }, 00:08:57.165 { 00:08:57.165 "name": "BaseBdev2", 00:08:57.165 "uuid": "970b221a-7f24-5f1b-afaf-273293cc8e1c", 00:08:57.165 "is_configured": true, 00:08:57.165 "data_offset": 2048, 00:08:57.165 "data_size": 63488 00:08:57.165 } 00:08:57.165 ] 00:08:57.165 }' 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.165 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.735 [2024-11-10 15:18:03.910446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:57.735 [2024-11-10 15:18:03.910562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.735 [2024-11-10 15:18:03.913145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.735 [2024-11-10 15:18:03.913241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.735 [2024-11-10 15:18:03.913368] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:57.735 [2024-11-10 15:18:03.913426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:08:57.735 { 00:08:57.735 "results": [ 00:08:57.735 { 00:08:57.735 "job": "raid_bdev1", 00:08:57.735 "core_mask": "0x1", 00:08:57.735 "workload": "randrw", 00:08:57.735 "percentage": 50, 00:08:57.735 "status": "finished", 00:08:57.735 "queue_depth": 1, 00:08:57.735 "io_size": 131072, 00:08:57.735 "runtime": 1.372344, 00:08:57.735 "iops": 19727.561019686025, 00:08:57.735 "mibps": 2465.945127460753, 00:08:57.735 "io_failed": 0, 00:08:57.735 "io_timeout": 0, 00:08:57.735 "avg_latency_us": 48.083052091592755, 00:08:57.735 "min_latency_us": 22.536389784711933, 00:08:57.735 "max_latency_us": 1385.2070077573433 00:08:57.735 } 00:08:57.735 ], 00:08:57.735 "core_count": 1 00:08:57.735 } 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76161 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 76161 ']' 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 76161 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76161 00:08:57.735 killing process with pid 76161 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76161' 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 76161 00:08:57.735 [2024-11-10 15:18:03.946882] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:57.735 15:18:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 76161 00:08:57.736 [2024-11-10 15:18:03.963027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yQPMeFuWMT 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:57.995 15:18:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:57.995 00:08:57.995 real 0m3.163s 00:08:57.995 user 0m4.010s 00:08:57.995 sys 0m0.513s 00:08:57.996 ************************************ 00:08:57.996 END TEST raid_read_error_test 00:08:57.996 ************************************ 00:08:57.996 15:18:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:57.996 15:18:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 15:18:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:57.996 15:18:04 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:57.996 15:18:04 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:57.996 15:18:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 ************************************ 00:08:57.996 START TEST raid_write_error_test 00:08:57.996 ************************************ 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 2 write 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.996 15:18:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oDc1589Q1s 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76290 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76290 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 76290 ']' 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:57.996 15:18:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.996 [2024-11-10 15:18:04.345743] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:08:57.996 [2024-11-10 15:18:04.345961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76290 ] 00:08:58.255 [2024-11-10 15:18:04.478833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:58.255 [2024-11-10 15:18:04.516877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.255 [2024-11-10 15:18:04.541635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.255 [2024-11-10 15:18:04.585144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.255 [2024-11-10 15:18:04.585196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.825 BaseBdev1_malloc 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:58.825 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 true 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-11-10 15:18:05.197291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:59.085 [2024-11-10 15:18:05.197351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.085 [2024-11-10 15:18:05.197386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:59.085 [2024-11-10 15:18:05.197400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.085 [2024-11-10 15:18:05.199579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.085 [2024-11-10 15:18:05.199625] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:59.085 BaseBdev1 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 BaseBdev2_malloc 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 true 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-11-10 15:18:05.238054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:59.085 [2024-11-10 15:18:05.238169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.085 [2024-11-10 15:18:05.238191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:59.085 [2024-11-10 15:18:05.238204] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.085 [2024-11-10 15:18:05.240306] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.085 [2024-11-10 15:18:05.240350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:59.085 BaseBdev2 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 [2024-11-10 15:18:05.250072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.085 [2024-11-10 15:18:05.251940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:59.085 [2024-11-10 15:18:05.252146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:59.085 [2024-11-10 
15:18:05.252171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:59.085 [2024-11-10 15:18:05.252431] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:08:59.085 [2024-11-10 15:18:05.252592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:59.085 [2024-11-10 15:18:05.252612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:08:59.085 [2024-11-10 15:18:05.252764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.085 15:18:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.085 "name": "raid_bdev1", 00:08:59.085 "uuid": "bcc74d4c-b0e0-4d07-b193-474ccb7faa79", 00:08:59.085 "strip_size_kb": 0, 00:08:59.085 "state": "online", 00:08:59.085 "raid_level": "raid1", 00:08:59.085 "superblock": true, 00:08:59.085 "num_base_bdevs": 2, 00:08:59.085 "num_base_bdevs_discovered": 2, 00:08:59.085 "num_base_bdevs_operational": 2, 00:08:59.085 "base_bdevs_list": [ 00:08:59.085 { 00:08:59.085 "name": "BaseBdev1", 00:08:59.085 "uuid": "12dc283a-683b-57e8-b882-733bfd6d6f89", 00:08:59.085 "is_configured": true, 00:08:59.085 "data_offset": 2048, 00:08:59.085 "data_size": 63488 00:08:59.085 }, 00:08:59.085 { 00:08:59.085 "name": "BaseBdev2", 00:08:59.085 "uuid": "286f04d4-2231-5d16-8c5d-b99bfa6026e1", 00:08:59.085 "is_configured": true, 00:08:59.085 "data_offset": 2048, 00:08:59.085 "data_size": 63488 00:08:59.085 } 00:08:59.085 ] 00:08:59.085 }' 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.085 15:18:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.345 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:59.345 15:18:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:59.604 [2024-11-10 15:18:05.766600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:09:00.543 15:18:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.543 [2024-11-10 15:18:06.684147] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:00.543 [2024-11-10 15:18:06.684305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.543 [2024-11-10 15:18:06.684563] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000067d0 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:00.543 
15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.543 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.544 "name": "raid_bdev1", 00:09:00.544 "uuid": "bcc74d4c-b0e0-4d07-b193-474ccb7faa79", 00:09:00.544 "strip_size_kb": 0, 00:09:00.544 "state": "online", 00:09:00.544 "raid_level": "raid1", 00:09:00.544 "superblock": true, 00:09:00.544 "num_base_bdevs": 2, 00:09:00.544 "num_base_bdevs_discovered": 1, 00:09:00.544 "num_base_bdevs_operational": 1, 00:09:00.544 "base_bdevs_list": [ 00:09:00.544 { 00:09:00.544 "name": null, 00:09:00.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.544 "is_configured": false, 00:09:00.544 "data_offset": 0, 00:09:00.544 "data_size": 63488 00:09:00.544 }, 00:09:00.544 { 00:09:00.544 "name": "BaseBdev2", 00:09:00.544 "uuid": "286f04d4-2231-5d16-8c5d-b99bfa6026e1", 00:09:00.544 "is_configured": true, 00:09:00.544 "data_offset": 2048, 00:09:00.544 "data_size": 63488 00:09:00.544 } 00:09:00.544 ] 00:09:00.544 }' 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.544 15:18:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.803 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:00.803 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:00.803 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.063 [2024-11-10 15:18:07.170114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:01.063 [2024-11-10 15:18:07.170156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.063 [2024-11-10 15:18:07.172819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.063 [2024-11-10 15:18:07.172871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.063 [2024-11-10 15:18:07.172938] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.063 [2024-11-10 15:18:07.172951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:01.063 { 00:09:01.063 "results": [ 00:09:01.063 { 00:09:01.063 "job": "raid_bdev1", 00:09:01.063 "core_mask": "0x1", 00:09:01.063 "workload": "randrw", 00:09:01.063 "percentage": 50, 00:09:01.063 "status": "finished", 00:09:01.063 "queue_depth": 1, 00:09:01.063 "io_size": 131072, 00:09:01.063 "runtime": 1.401447, 00:09:01.063 "iops": 22992.664010840224, 00:09:01.063 "mibps": 2874.083001355028, 00:09:01.063 "io_failed": 0, 00:09:01.063 "io_timeout": 0, 00:09:01.063 "avg_latency_us": 40.8163749721564, 00:09:01.063 "min_latency_us": 22.759522356837792, 00:09:01.063 "max_latency_us": 1349.5057962172057 00:09:01.063 } 00:09:01.063 ], 00:09:01.063 "core_count": 1 00:09:01.063 } 00:09:01.063 15:18:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.063 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76290 00:09:01.063 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 76290 ']' 00:09:01.063 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 76290 00:09:01.063 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76290 00:09:01.064 killing process with pid 76290 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76290' 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 76290 00:09:01.064 [2024-11-10 15:18:07.227935] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.064 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 76290 00:09:01.064 [2024-11-10 15:18:07.243842] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.323 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:01.323 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oDc1589Q1s 00:09:01.323 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:01.324 00:09:01.324 real 0m3.220s 00:09:01.324 user 0m4.116s 00:09:01.324 sys 0m0.508s 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:01.324 15:18:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.324 ************************************ 00:09:01.324 END TEST raid_write_error_test 00:09:01.324 ************************************ 00:09:01.324 15:18:07 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:01.324 15:18:07 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:01.324 15:18:07 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:01.324 15:18:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:01.324 15:18:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:01.324 15:18:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.324 ************************************ 00:09:01.324 START TEST raid_state_function_test 00:09:01.324 ************************************ 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 false 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:01.324 15:18:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76417 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76417' 00:09:01.324 Process raid pid: 76417 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76417 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 76417 ']' 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:01.324 15:18:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.324 [2024-11-10 15:18:07.633527] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:09:01.324 [2024-11-10 15:18:07.633742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.584 [2024-11-10 15:18:07.767252] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:01.584 [2024-11-10 15:18:07.806588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.584 [2024-11-10 15:18:07.831948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.584 [2024-11-10 15:18:07.875689] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.584 [2024-11-10 15:18:07.875841] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 [2024-11-10 15:18:08.455314] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.154 [2024-11-10 15:18:08.455466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.154 [2024-11-10 15:18:08.455523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.154 [2024-11-10 15:18:08.455552] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.154 [2024-11-10 15:18:08.455583] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.154 [2024-11-10 15:18:08.455627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.154 15:18:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.154 "name": "Existed_Raid", 00:09:02.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.154 "strip_size_kb": 64, 00:09:02.154 "state": "configuring", 00:09:02.154 "raid_level": "raid0", 00:09:02.154 "superblock": false, 00:09:02.154 "num_base_bdevs": 3, 00:09:02.154 "num_base_bdevs_discovered": 0, 00:09:02.154 "num_base_bdevs_operational": 3, 00:09:02.154 "base_bdevs_list": [ 00:09:02.154 { 00:09:02.154 "name": "BaseBdev1", 00:09:02.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.154 "is_configured": false, 00:09:02.154 "data_offset": 0, 00:09:02.154 "data_size": 0 00:09:02.154 }, 00:09:02.154 { 00:09:02.154 "name": "BaseBdev2", 00:09:02.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.154 "is_configured": false, 00:09:02.154 "data_offset": 0, 00:09:02.154 "data_size": 0 00:09:02.154 }, 00:09:02.154 { 00:09:02.154 "name": "BaseBdev3", 00:09:02.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.154 "is_configured": false, 00:09:02.154 "data_offset": 0, 00:09:02.154 "data_size": 0 00:09:02.154 } 00:09:02.154 ] 00:09:02.154 }' 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.154 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 [2024-11-10 15:18:08.879320] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.724 [2024-11-10 15:18:08.879431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 [2024-11-10 15:18:08.891338] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.724 [2024-11-10 15:18:08.891430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.724 [2024-11-10 15:18:08.891465] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.724 [2024-11-10 15:18:08.891491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.724 [2024-11-10 15:18:08.891516] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:02.724 [2024-11-10 15:18:08.891555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 
[2024-11-10 15:18:08.912473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.724 BaseBdev1 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 [ 00:09:02.724 { 00:09:02.724 "name": "BaseBdev1", 00:09:02.724 "aliases": [ 00:09:02.724 "79eb098a-4740-417f-ac54-e1fd24db9499" 00:09:02.724 ], 00:09:02.724 "product_name": "Malloc disk", 00:09:02.724 "block_size": 512, 00:09:02.724 "num_blocks": 65536, 00:09:02.724 "uuid": 
"79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:02.724 "assigned_rate_limits": { 00:09:02.724 "rw_ios_per_sec": 0, 00:09:02.724 "rw_mbytes_per_sec": 0, 00:09:02.724 "r_mbytes_per_sec": 0, 00:09:02.724 "w_mbytes_per_sec": 0 00:09:02.724 }, 00:09:02.724 "claimed": true, 00:09:02.724 "claim_type": "exclusive_write", 00:09:02.724 "zoned": false, 00:09:02.724 "supported_io_types": { 00:09:02.724 "read": true, 00:09:02.724 "write": true, 00:09:02.724 "unmap": true, 00:09:02.724 "flush": true, 00:09:02.724 "reset": true, 00:09:02.724 "nvme_admin": false, 00:09:02.724 "nvme_io": false, 00:09:02.724 "nvme_io_md": false, 00:09:02.724 "write_zeroes": true, 00:09:02.724 "zcopy": true, 00:09:02.724 "get_zone_info": false, 00:09:02.724 "zone_management": false, 00:09:02.724 "zone_append": false, 00:09:02.724 "compare": false, 00:09:02.724 "compare_and_write": false, 00:09:02.724 "abort": true, 00:09:02.724 "seek_hole": false, 00:09:02.724 "seek_data": false, 00:09:02.724 "copy": true, 00:09:02.724 "nvme_iov_md": false 00:09:02.724 }, 00:09:02.724 "memory_domains": [ 00:09:02.724 { 00:09:02.724 "dma_device_id": "system", 00:09:02.724 "dma_device_type": 1 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.724 "dma_device_type": 2 00:09:02.724 } 00:09:02.724 ], 00:09:02.724 "driver_specific": {} 00:09:02.724 } 00:09:02.724 ] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.724 15:18:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.724 15:18:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.724 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.724 "name": "Existed_Raid", 00:09:02.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.724 "strip_size_kb": 64, 00:09:02.724 "state": "configuring", 00:09:02.724 "raid_level": "raid0", 00:09:02.724 "superblock": false, 00:09:02.724 "num_base_bdevs": 3, 00:09:02.724 "num_base_bdevs_discovered": 1, 00:09:02.724 "num_base_bdevs_operational": 3, 00:09:02.724 "base_bdevs_list": [ 00:09:02.724 { 00:09:02.724 "name": "BaseBdev1", 00:09:02.724 "uuid": "79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:02.724 "is_configured": true, 00:09:02.724 "data_offset": 0, 
00:09:02.724 "data_size": 65536 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "name": "BaseBdev2", 00:09:02.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.724 "is_configured": false, 00:09:02.724 "data_offset": 0, 00:09:02.724 "data_size": 0 00:09:02.724 }, 00:09:02.724 { 00:09:02.724 "name": "BaseBdev3", 00:09:02.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.724 "is_configured": false, 00:09:02.724 "data_offset": 0, 00:09:02.724 "data_size": 0 00:09:02.724 } 00:09:02.724 ] 00:09:02.724 }' 00:09:02.724 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.724 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 [2024-11-10 15:18:09.356626] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.296 [2024-11-10 15:18:09.356694] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 [2024-11-10 15:18:09.368648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.296 [2024-11-10 
15:18:09.370604] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.296 [2024-11-10 15:18:09.370652] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.296 [2024-11-10 15:18:09.370667] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.296 [2024-11-10 15:18:09.370677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.296 15:18:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.296 "name": "Existed_Raid", 00:09:03.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.296 "strip_size_kb": 64, 00:09:03.296 "state": "configuring", 00:09:03.296 "raid_level": "raid0", 00:09:03.296 "superblock": false, 00:09:03.296 "num_base_bdevs": 3, 00:09:03.296 "num_base_bdevs_discovered": 1, 00:09:03.296 "num_base_bdevs_operational": 3, 00:09:03.296 "base_bdevs_list": [ 00:09:03.296 { 00:09:03.296 "name": "BaseBdev1", 00:09:03.296 "uuid": "79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:03.296 "is_configured": true, 00:09:03.296 "data_offset": 0, 00:09:03.296 "data_size": 65536 00:09:03.296 }, 00:09:03.296 { 00:09:03.296 "name": "BaseBdev2", 00:09:03.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.296 "is_configured": false, 00:09:03.296 "data_offset": 0, 00:09:03.296 "data_size": 0 00:09:03.296 }, 00:09:03.296 { 00:09:03.296 "name": "BaseBdev3", 00:09:03.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.296 "is_configured": false, 00:09:03.296 "data_offset": 0, 00:09:03.296 "data_size": 0 00:09:03.296 } 00:09:03.296 ] 00:09:03.296 }' 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.296 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.572 15:18:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.572 [2024-11-10 15:18:09.840383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.572 BaseBdev2 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.572 [ 00:09:03.572 { 00:09:03.572 "name": "BaseBdev2", 00:09:03.572 "aliases": [ 00:09:03.572 "f7737d1b-f5b6-4c87-ae97-a93d064cb038" 00:09:03.572 ], 00:09:03.572 "product_name": "Malloc disk", 00:09:03.572 "block_size": 512, 00:09:03.572 "num_blocks": 65536, 00:09:03.572 "uuid": "f7737d1b-f5b6-4c87-ae97-a93d064cb038", 00:09:03.572 "assigned_rate_limits": { 00:09:03.572 "rw_ios_per_sec": 0, 00:09:03.572 "rw_mbytes_per_sec": 0, 00:09:03.572 "r_mbytes_per_sec": 0, 00:09:03.572 "w_mbytes_per_sec": 0 00:09:03.572 }, 00:09:03.572 "claimed": true, 00:09:03.572 "claim_type": "exclusive_write", 00:09:03.572 "zoned": false, 00:09:03.572 "supported_io_types": { 00:09:03.572 "read": true, 00:09:03.572 "write": true, 00:09:03.572 "unmap": true, 00:09:03.572 "flush": true, 00:09:03.572 "reset": true, 00:09:03.572 "nvme_admin": false, 00:09:03.572 "nvme_io": false, 00:09:03.572 "nvme_io_md": false, 00:09:03.572 "write_zeroes": true, 00:09:03.572 "zcopy": true, 00:09:03.572 "get_zone_info": false, 00:09:03.572 "zone_management": false, 00:09:03.572 "zone_append": false, 00:09:03.572 "compare": false, 00:09:03.572 "compare_and_write": false, 00:09:03.572 "abort": true, 00:09:03.572 "seek_hole": false, 00:09:03.572 "seek_data": false, 00:09:03.572 "copy": true, 00:09:03.572 "nvme_iov_md": false 00:09:03.572 }, 00:09:03.572 "memory_domains": [ 00:09:03.572 { 00:09:03.572 "dma_device_id": "system", 00:09:03.572 "dma_device_type": 1 00:09:03.572 }, 00:09:03.572 { 00:09:03.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.572 "dma_device_type": 2 00:09:03.572 } 00:09:03.572 ], 00:09:03.572 "driver_specific": {} 00:09:03.572 } 00:09:03.572 ] 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- 
# (( i++ )) 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.572 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.829 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.829 "name": "Existed_Raid", 
00:09:03.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.829 "strip_size_kb": 64, 00:09:03.829 "state": "configuring", 00:09:03.829 "raid_level": "raid0", 00:09:03.829 "superblock": false, 00:09:03.829 "num_base_bdevs": 3, 00:09:03.829 "num_base_bdevs_discovered": 2, 00:09:03.829 "num_base_bdevs_operational": 3, 00:09:03.829 "base_bdevs_list": [ 00:09:03.829 { 00:09:03.829 "name": "BaseBdev1", 00:09:03.829 "uuid": "79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:03.829 "is_configured": true, 00:09:03.829 "data_offset": 0, 00:09:03.830 "data_size": 65536 00:09:03.830 }, 00:09:03.830 { 00:09:03.830 "name": "BaseBdev2", 00:09:03.830 "uuid": "f7737d1b-f5b6-4c87-ae97-a93d064cb038", 00:09:03.830 "is_configured": true, 00:09:03.830 "data_offset": 0, 00:09:03.830 "data_size": 65536 00:09:03.830 }, 00:09:03.830 { 00:09:03.830 "name": "BaseBdev3", 00:09:03.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.830 "is_configured": false, 00:09:03.830 "data_offset": 0, 00:09:03.830 "data_size": 0 00:09:03.830 } 00:09:03.830 ] 00:09:03.830 }' 00:09:03.830 15:18:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.830 15:18:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.088 [2024-11-10 15:18:10.358997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.088 [2024-11-10 15:18:10.359086] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:04.088 [2024-11-10 15:18:10.359104] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 
00:09:04.088 [2024-11-10 15:18:10.359561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:04.088 [2024-11-10 15:18:10.359820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:04.088 [2024-11-10 15:18:10.359857] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:04.088 [2024-11-10 15:18:10.360179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.088 BaseBdev3 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:09:04.088 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.089 [ 00:09:04.089 { 00:09:04.089 "name": "BaseBdev3", 00:09:04.089 "aliases": [ 00:09:04.089 "c3a9c123-840d-4cc5-9534-c383f8ff1105" 00:09:04.089 ], 00:09:04.089 "product_name": "Malloc disk", 00:09:04.089 "block_size": 512, 00:09:04.089 "num_blocks": 65536, 00:09:04.089 "uuid": "c3a9c123-840d-4cc5-9534-c383f8ff1105", 00:09:04.089 "assigned_rate_limits": { 00:09:04.089 "rw_ios_per_sec": 0, 00:09:04.089 "rw_mbytes_per_sec": 0, 00:09:04.089 "r_mbytes_per_sec": 0, 00:09:04.089 "w_mbytes_per_sec": 0 00:09:04.089 }, 00:09:04.089 "claimed": true, 00:09:04.089 "claim_type": "exclusive_write", 00:09:04.089 "zoned": false, 00:09:04.089 "supported_io_types": { 00:09:04.089 "read": true, 00:09:04.089 "write": true, 00:09:04.089 "unmap": true, 00:09:04.089 "flush": true, 00:09:04.089 "reset": true, 00:09:04.089 "nvme_admin": false, 00:09:04.089 "nvme_io": false, 00:09:04.089 "nvme_io_md": false, 00:09:04.089 "write_zeroes": true, 00:09:04.089 "zcopy": true, 00:09:04.089 "get_zone_info": false, 00:09:04.089 "zone_management": false, 00:09:04.089 "zone_append": false, 00:09:04.089 "compare": false, 00:09:04.089 "compare_and_write": false, 00:09:04.089 "abort": true, 00:09:04.089 "seek_hole": false, 00:09:04.089 "seek_data": false, 00:09:04.089 "copy": true, 00:09:04.089 "nvme_iov_md": false 00:09:04.089 }, 00:09:04.089 "memory_domains": [ 00:09:04.089 { 00:09:04.089 "dma_device_id": "system", 00:09:04.089 "dma_device_type": 1 00:09:04.089 }, 00:09:04.089 { 00:09:04.089 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.089 "dma_device_type": 2 00:09:04.089 } 00:09:04.089 ], 00:09:04.089 "driver_specific": {} 00:09:04.089 } 00:09:04.089 ] 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.089 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.348 15:18:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.348 "name": "Existed_Raid", 00:09:04.348 "uuid": "157cad6c-9f7b-42c9-b169-066f01710d0e", 00:09:04.348 "strip_size_kb": 64, 00:09:04.348 "state": "online", 00:09:04.348 "raid_level": "raid0", 00:09:04.348 "superblock": false, 00:09:04.348 "num_base_bdevs": 3, 00:09:04.348 "num_base_bdevs_discovered": 3, 00:09:04.348 "num_base_bdevs_operational": 3, 00:09:04.348 "base_bdevs_list": [ 00:09:04.348 { 00:09:04.348 "name": "BaseBdev1", 00:09:04.348 "uuid": "79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:04.348 "is_configured": true, 00:09:04.348 "data_offset": 0, 00:09:04.348 "data_size": 65536 00:09:04.348 }, 00:09:04.348 { 00:09:04.348 "name": "BaseBdev2", 00:09:04.348 "uuid": "f7737d1b-f5b6-4c87-ae97-a93d064cb038", 00:09:04.348 "is_configured": true, 00:09:04.348 "data_offset": 0, 00:09:04.348 "data_size": 65536 00:09:04.348 }, 00:09:04.348 { 00:09:04.348 "name": "BaseBdev3", 00:09:04.348 "uuid": "c3a9c123-840d-4cc5-9534-c383f8ff1105", 00:09:04.348 "is_configured": true, 00:09:04.348 "data_offset": 0, 00:09:04.348 "data_size": 65536 00:09:04.348 } 00:09:04.348 ] 00:09:04.348 }' 00:09:04.348 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.348 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # 
local cmp_raid_bdev cmp_base_bdev 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.607 [2024-11-10 15:18:10.839501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.607 "name": "Existed_Raid", 00:09:04.607 "aliases": [ 00:09:04.607 "157cad6c-9f7b-42c9-b169-066f01710d0e" 00:09:04.607 ], 00:09:04.607 "product_name": "Raid Volume", 00:09:04.607 "block_size": 512, 00:09:04.607 "num_blocks": 196608, 00:09:04.607 "uuid": "157cad6c-9f7b-42c9-b169-066f01710d0e", 00:09:04.607 "assigned_rate_limits": { 00:09:04.607 "rw_ios_per_sec": 0, 00:09:04.607 "rw_mbytes_per_sec": 0, 00:09:04.607 "r_mbytes_per_sec": 0, 00:09:04.607 "w_mbytes_per_sec": 0 00:09:04.607 }, 00:09:04.607 "claimed": false, 00:09:04.607 "zoned": false, 00:09:04.607 "supported_io_types": { 00:09:04.607 "read": true, 00:09:04.607 "write": true, 00:09:04.607 "unmap": true, 00:09:04.607 "flush": true, 00:09:04.607 "reset": true, 00:09:04.607 "nvme_admin": false, 00:09:04.607 "nvme_io": false, 00:09:04.607 "nvme_io_md": false, 00:09:04.607 "write_zeroes": true, 00:09:04.607 "zcopy": false, 00:09:04.607 "get_zone_info": false, 00:09:04.607 "zone_management": false, 00:09:04.607 "zone_append": false, 00:09:04.607 "compare": false, 00:09:04.607 "compare_and_write": false, 00:09:04.607 "abort": false, 00:09:04.607 "seek_hole": false, 00:09:04.607 "seek_data": false, 00:09:04.607 "copy": 
false, 00:09:04.607 "nvme_iov_md": false 00:09:04.607 }, 00:09:04.607 "memory_domains": [ 00:09:04.607 { 00:09:04.607 "dma_device_id": "system", 00:09:04.607 "dma_device_type": 1 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.607 "dma_device_type": 2 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "dma_device_id": "system", 00:09:04.607 "dma_device_type": 1 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.607 "dma_device_type": 2 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "dma_device_id": "system", 00:09:04.607 "dma_device_type": 1 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.607 "dma_device_type": 2 00:09:04.607 } 00:09:04.607 ], 00:09:04.607 "driver_specific": { 00:09:04.607 "raid": { 00:09:04.607 "uuid": "157cad6c-9f7b-42c9-b169-066f01710d0e", 00:09:04.607 "strip_size_kb": 64, 00:09:04.607 "state": "online", 00:09:04.607 "raid_level": "raid0", 00:09:04.607 "superblock": false, 00:09:04.607 "num_base_bdevs": 3, 00:09:04.607 "num_base_bdevs_discovered": 3, 00:09:04.607 "num_base_bdevs_operational": 3, 00:09:04.607 "base_bdevs_list": [ 00:09:04.607 { 00:09:04.607 "name": "BaseBdev1", 00:09:04.607 "uuid": "79eb098a-4740-417f-ac54-e1fd24db9499", 00:09:04.607 "is_configured": true, 00:09:04.607 "data_offset": 0, 00:09:04.607 "data_size": 65536 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "name": "BaseBdev2", 00:09:04.607 "uuid": "f7737d1b-f5b6-4c87-ae97-a93d064cb038", 00:09:04.607 "is_configured": true, 00:09:04.607 "data_offset": 0, 00:09:04.607 "data_size": 65536 00:09:04.607 }, 00:09:04.607 { 00:09:04.607 "name": "BaseBdev3", 00:09:04.607 "uuid": "c3a9c123-840d-4cc5-9534-c383f8ff1105", 00:09:04.607 "is_configured": true, 00:09:04.607 "data_offset": 0, 00:09:04.607 "data_size": 65536 00:09:04.607 } 00:09:04.607 ] 00:09:04.607 } 00:09:04.607 } 00:09:04.607 }' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:04.607 BaseBdev2 00:09:04.607 BaseBdev3' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.607 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.867 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:04.867 15:18:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.867 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.867 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.867 15:18:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.867 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.867 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.867 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.867 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.868 [2024-11-10 15:18:11.119367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.868 [2024-11-10 15:18:11.119419] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.868 [2024-11-10 15:18:11.119493] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.868 15:18:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.868 "name": "Existed_Raid", 00:09:04.868 "uuid": "157cad6c-9f7b-42c9-b169-066f01710d0e", 00:09:04.868 "strip_size_kb": 64, 00:09:04.868 "state": "offline", 00:09:04.868 "raid_level": "raid0", 00:09:04.868 "superblock": false, 00:09:04.868 "num_base_bdevs": 3, 00:09:04.868 "num_base_bdevs_discovered": 2, 00:09:04.868 "num_base_bdevs_operational": 2, 00:09:04.868 "base_bdevs_list": [ 00:09:04.868 { 00:09:04.868 "name": null, 00:09:04.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.868 "is_configured": false, 00:09:04.868 "data_offset": 0, 00:09:04.868 "data_size": 65536 00:09:04.868 }, 00:09:04.868 { 00:09:04.868 "name": "BaseBdev2", 00:09:04.868 "uuid": "f7737d1b-f5b6-4c87-ae97-a93d064cb038", 00:09:04.868 "is_configured": true, 00:09:04.868 "data_offset": 0, 00:09:04.868 "data_size": 65536 00:09:04.868 }, 00:09:04.868 { 00:09:04.868 "name": "BaseBdev3", 00:09:04.868 "uuid": "c3a9c123-840d-4cc5-9534-c383f8ff1105", 00:09:04.868 "is_configured": true, 00:09:04.868 "data_offset": 0, 00:09:04.868 "data_size": 65536 00:09:04.868 } 00:09:04.868 ] 00:09:04.868 }' 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.868 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 [2024-11-10 15:18:11.660170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 [2024-11-10 15:18:11.740732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:05.438 [2024-11-10 15:18:11.740809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.438 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:05.699 15:18:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 BaseBdev2 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 
15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 [ 00:09:05.699 { 00:09:05.699 "name": "BaseBdev2", 00:09:05.699 "aliases": [ 00:09:05.699 "242a9ea4-5b54-41ca-9b82-6afa364fa431" 00:09:05.699 ], 00:09:05.699 "product_name": "Malloc disk", 00:09:05.699 "block_size": 512, 00:09:05.699 "num_blocks": 65536, 00:09:05.699 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:05.699 "assigned_rate_limits": { 00:09:05.699 "rw_ios_per_sec": 0, 00:09:05.699 "rw_mbytes_per_sec": 0, 00:09:05.699 "r_mbytes_per_sec": 0, 00:09:05.699 "w_mbytes_per_sec": 0 00:09:05.699 }, 00:09:05.699 "claimed": false, 00:09:05.699 "zoned": false, 00:09:05.699 "supported_io_types": { 00:09:05.699 "read": true, 00:09:05.699 "write": true, 00:09:05.699 "unmap": true, 00:09:05.699 "flush": true, 00:09:05.699 "reset": true, 00:09:05.699 "nvme_admin": false, 00:09:05.699 "nvme_io": false, 00:09:05.699 "nvme_io_md": false, 00:09:05.699 "write_zeroes": true, 00:09:05.699 "zcopy": true, 00:09:05.699 "get_zone_info": false, 00:09:05.699 "zone_management": false, 00:09:05.699 "zone_append": false, 00:09:05.699 "compare": false, 00:09:05.699 "compare_and_write": false, 00:09:05.699 "abort": true, 00:09:05.699 "seek_hole": false, 00:09:05.699 "seek_data": false, 00:09:05.699 "copy": true, 00:09:05.699 "nvme_iov_md": false 00:09:05.699 }, 00:09:05.699 "memory_domains": [ 00:09:05.699 { 00:09:05.699 "dma_device_id": "system", 00:09:05.699 "dma_device_type": 1 00:09:05.699 }, 00:09:05.699 { 00:09:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.699 "dma_device_type": 2 00:09:05.699 } 00:09:05.699 ], 00:09:05.699 "driver_specific": {} 00:09:05.699 } 00:09:05.699 ] 00:09:05.699 15:18:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 BaseBdev3 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 
15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 [ 00:09:05.699 { 00:09:05.699 "name": "BaseBdev3", 00:09:05.699 "aliases": [ 00:09:05.699 "8a6ac1ef-5639-4361-a9cc-07679f707bf8" 00:09:05.699 ], 00:09:05.699 "product_name": "Malloc disk", 00:09:05.699 "block_size": 512, 00:09:05.699 "num_blocks": 65536, 00:09:05.699 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:05.699 "assigned_rate_limits": { 00:09:05.699 "rw_ios_per_sec": 0, 00:09:05.699 "rw_mbytes_per_sec": 0, 00:09:05.699 "r_mbytes_per_sec": 0, 00:09:05.699 "w_mbytes_per_sec": 0 00:09:05.699 }, 00:09:05.699 "claimed": false, 00:09:05.699 "zoned": false, 00:09:05.699 "supported_io_types": { 00:09:05.699 "read": true, 00:09:05.699 "write": true, 00:09:05.699 "unmap": true, 00:09:05.699 "flush": true, 00:09:05.699 "reset": true, 00:09:05.699 "nvme_admin": false, 00:09:05.699 "nvme_io": false, 00:09:05.699 "nvme_io_md": false, 00:09:05.699 "write_zeroes": true, 00:09:05.699 "zcopy": true, 00:09:05.699 "get_zone_info": false, 00:09:05.699 "zone_management": false, 00:09:05.699 "zone_append": false, 00:09:05.699 "compare": false, 00:09:05.699 "compare_and_write": false, 00:09:05.699 "abort": true, 00:09:05.699 "seek_hole": false, 00:09:05.699 "seek_data": false, 00:09:05.699 "copy": true, 00:09:05.699 "nvme_iov_md": false 00:09:05.699 }, 00:09:05.699 "memory_domains": [ 00:09:05.699 { 00:09:05.699 "dma_device_id": "system", 00:09:05.699 "dma_device_type": 1 00:09:05.699 }, 00:09:05.699 { 00:09:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.699 "dma_device_type": 2 00:09:05.699 } 00:09:05.699 ], 00:09:05.699 "driver_specific": {} 00:09:05.699 } 00:09:05.699 ] 00:09:05.699 15:18:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.699 [2024-11-10 15:18:11.938302] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:05.699 [2024-11-10 15:18:11.938357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:05.699 [2024-11-10 15:18:11.938377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.699 [2024-11-10 15:18:11.940480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.699 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.700 "name": "Existed_Raid", 00:09:05.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.700 "strip_size_kb": 64, 00:09:05.700 "state": "configuring", 00:09:05.700 "raid_level": "raid0", 00:09:05.700 "superblock": false, 00:09:05.700 "num_base_bdevs": 3, 00:09:05.700 "num_base_bdevs_discovered": 2, 00:09:05.700 "num_base_bdevs_operational": 3, 00:09:05.700 "base_bdevs_list": [ 00:09:05.700 { 00:09:05.700 "name": "BaseBdev1", 00:09:05.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.700 "is_configured": false, 00:09:05.700 "data_offset": 0, 00:09:05.700 "data_size": 0 00:09:05.700 }, 00:09:05.700 { 00:09:05.700 "name": "BaseBdev2", 00:09:05.700 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:05.700 
"is_configured": true, 00:09:05.700 "data_offset": 0, 00:09:05.700 "data_size": 65536 00:09:05.700 }, 00:09:05.700 { 00:09:05.700 "name": "BaseBdev3", 00:09:05.700 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:05.700 "is_configured": true, 00:09:05.700 "data_offset": 0, 00:09:05.700 "data_size": 65536 00:09:05.700 } 00:09:05.700 ] 00:09:05.700 }' 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.700 15:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.269 [2024-11-10 15:18:12.374389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.269 15:18:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.269 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.270 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.270 "name": "Existed_Raid", 00:09:06.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.270 "strip_size_kb": 64, 00:09:06.270 "state": "configuring", 00:09:06.270 "raid_level": "raid0", 00:09:06.270 "superblock": false, 00:09:06.270 "num_base_bdevs": 3, 00:09:06.270 "num_base_bdevs_discovered": 1, 00:09:06.270 "num_base_bdevs_operational": 3, 00:09:06.270 "base_bdevs_list": [ 00:09:06.270 { 00:09:06.270 "name": "BaseBdev1", 00:09:06.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.270 "is_configured": false, 00:09:06.270 "data_offset": 0, 00:09:06.270 "data_size": 0 00:09:06.270 }, 00:09:06.270 { 00:09:06.270 "name": null, 00:09:06.270 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:06.270 "is_configured": false, 00:09:06.270 "data_offset": 0, 00:09:06.270 "data_size": 65536 00:09:06.270 }, 00:09:06.270 { 00:09:06.270 "name": "BaseBdev3", 00:09:06.270 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:06.270 "is_configured": true, 00:09:06.270 "data_offset": 0, 
00:09:06.270 "data_size": 65536 00:09:06.270 } 00:09:06.270 ] 00:09:06.270 }' 00:09:06.270 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.270 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.530 [2024-11-10 15:18:12.855284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:06.530 BaseBdev1 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 
-- # local i 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.530 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.530 [ 00:09:06.530 { 00:09:06.530 "name": "BaseBdev1", 00:09:06.530 "aliases": [ 00:09:06.530 "e1027972-4606-4297-aa4b-1e54ba3b1bbb" 00:09:06.530 ], 00:09:06.530 "product_name": "Malloc disk", 00:09:06.530 "block_size": 512, 00:09:06.530 "num_blocks": 65536, 00:09:06.530 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:06.530 "assigned_rate_limits": { 00:09:06.530 "rw_ios_per_sec": 0, 00:09:06.530 "rw_mbytes_per_sec": 0, 00:09:06.530 "r_mbytes_per_sec": 0, 00:09:06.530 "w_mbytes_per_sec": 0 00:09:06.530 }, 00:09:06.530 "claimed": true, 00:09:06.530 "claim_type": "exclusive_write", 00:09:06.530 "zoned": false, 00:09:06.530 "supported_io_types": { 00:09:06.530 "read": true, 00:09:06.530 "write": true, 00:09:06.530 "unmap": true, 00:09:06.530 "flush": true, 00:09:06.530 "reset": true, 00:09:06.530 "nvme_admin": false, 00:09:06.530 "nvme_io": false, 00:09:06.530 "nvme_io_md": false, 00:09:06.530 "write_zeroes": true, 00:09:06.530 "zcopy": true, 
00:09:06.530 "get_zone_info": false, 00:09:06.530 "zone_management": false, 00:09:06.530 "zone_append": false, 00:09:06.530 "compare": false, 00:09:06.530 "compare_and_write": false, 00:09:06.530 "abort": true, 00:09:06.530 "seek_hole": false, 00:09:06.530 "seek_data": false, 00:09:06.530 "copy": true, 00:09:06.530 "nvme_iov_md": false 00:09:06.530 }, 00:09:06.530 "memory_domains": [ 00:09:06.530 { 00:09:06.530 "dma_device_id": "system", 00:09:06.790 "dma_device_type": 1 00:09:06.790 }, 00:09:06.790 { 00:09:06.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.790 "dma_device_type": 2 00:09:06.790 } 00:09:06.790 ], 00:09:06.790 "driver_specific": {} 00:09:06.790 } 00:09:06.790 ] 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.790 15:18:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.790 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.790 "name": "Existed_Raid", 00:09:06.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.791 "strip_size_kb": 64, 00:09:06.791 "state": "configuring", 00:09:06.791 "raid_level": "raid0", 00:09:06.791 "superblock": false, 00:09:06.791 "num_base_bdevs": 3, 00:09:06.791 "num_base_bdevs_discovered": 2, 00:09:06.791 "num_base_bdevs_operational": 3, 00:09:06.791 "base_bdevs_list": [ 00:09:06.791 { 00:09:06.791 "name": "BaseBdev1", 00:09:06.791 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:06.791 "is_configured": true, 00:09:06.791 "data_offset": 0, 00:09:06.791 "data_size": 65536 00:09:06.791 }, 00:09:06.791 { 00:09:06.791 "name": null, 00:09:06.791 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:06.791 "is_configured": false, 00:09:06.791 "data_offset": 0, 00:09:06.791 "data_size": 65536 00:09:06.791 }, 00:09:06.791 { 00:09:06.791 "name": "BaseBdev3", 00:09:06.791 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:06.791 "is_configured": true, 00:09:06.791 "data_offset": 0, 00:09:06.791 "data_size": 65536 00:09:06.791 } 00:09:06.791 ] 00:09:06.791 }' 00:09:06.791 15:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.791 15:18:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.051 [2024-11-10 15:18:13.363607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.051 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.311 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.311 "name": "Existed_Raid", 00:09:07.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.311 "strip_size_kb": 64, 00:09:07.311 "state": "configuring", 00:09:07.311 "raid_level": "raid0", 00:09:07.311 "superblock": false, 00:09:07.311 "num_base_bdevs": 3, 00:09:07.311 "num_base_bdevs_discovered": 1, 00:09:07.311 "num_base_bdevs_operational": 3, 00:09:07.311 "base_bdevs_list": [ 00:09:07.311 { 00:09:07.311 "name": "BaseBdev1", 00:09:07.311 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:07.311 "is_configured": true, 00:09:07.311 "data_offset": 0, 00:09:07.311 "data_size": 65536 00:09:07.311 }, 00:09:07.311 { 00:09:07.311 "name": null, 00:09:07.311 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:07.311 "is_configured": false, 00:09:07.311 "data_offset": 0, 00:09:07.311 "data_size": 65536 
00:09:07.311 }, 00:09:07.311 { 00:09:07.311 "name": null, 00:09:07.311 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:07.311 "is_configured": false, 00:09:07.311 "data_offset": 0, 00:09:07.311 "data_size": 65536 00:09:07.311 } 00:09:07.311 ] 00:09:07.311 }' 00:09:07.311 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.311 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.570 [2024-11-10 15:18:13.839742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.570 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.571 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.571 "name": "Existed_Raid", 00:09:07.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.571 "strip_size_kb": 64, 00:09:07.571 "state": "configuring", 00:09:07.571 "raid_level": "raid0", 00:09:07.571 "superblock": false, 00:09:07.571 "num_base_bdevs": 3, 00:09:07.571 "num_base_bdevs_discovered": 2, 00:09:07.571 "num_base_bdevs_operational": 3, 00:09:07.571 "base_bdevs_list": [ 
00:09:07.571 { 00:09:07.571 "name": "BaseBdev1", 00:09:07.571 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:07.571 "is_configured": true, 00:09:07.571 "data_offset": 0, 00:09:07.571 "data_size": 65536 00:09:07.571 }, 00:09:07.571 { 00:09:07.571 "name": null, 00:09:07.571 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:07.571 "is_configured": false, 00:09:07.571 "data_offset": 0, 00:09:07.571 "data_size": 65536 00:09:07.571 }, 00:09:07.571 { 00:09:07.571 "name": "BaseBdev3", 00:09:07.571 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:07.571 "is_configured": true, 00:09:07.571 "data_offset": 0, 00:09:07.571 "data_size": 65536 00:09:07.571 } 00:09:07.571 ] 00:09:07.571 }' 00:09:07.571 15:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.571 15:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 [2024-11-10 15:18:14.323807] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.140 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:08.140 "name": "Existed_Raid", 00:09:08.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.140 "strip_size_kb": 64, 00:09:08.140 "state": "configuring", 00:09:08.140 "raid_level": "raid0", 00:09:08.140 "superblock": false, 00:09:08.140 "num_base_bdevs": 3, 00:09:08.140 "num_base_bdevs_discovered": 1, 00:09:08.140 "num_base_bdevs_operational": 3, 00:09:08.140 "base_bdevs_list": [ 00:09:08.140 { 00:09:08.140 "name": null, 00:09:08.140 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:08.140 "is_configured": false, 00:09:08.140 "data_offset": 0, 00:09:08.140 "data_size": 65536 00:09:08.140 }, 00:09:08.140 { 00:09:08.140 "name": null, 00:09:08.140 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:08.140 "is_configured": false, 00:09:08.140 "data_offset": 0, 00:09:08.141 "data_size": 65536 00:09:08.141 }, 00:09:08.141 { 00:09:08.141 "name": "BaseBdev3", 00:09:08.141 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:08.141 "is_configured": true, 00:09:08.141 "data_offset": 0, 00:09:08.141 "data_size": 65536 00:09:08.141 } 00:09:08.141 ] 00:09:08.141 }' 00:09:08.141 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.141 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 
00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.401 [2024-11-10 15:18:14.746968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.401 15:18:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.401 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.660 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.660 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.660 "name": "Existed_Raid", 00:09:08.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.660 "strip_size_kb": 64, 00:09:08.660 "state": "configuring", 00:09:08.660 "raid_level": "raid0", 00:09:08.660 "superblock": false, 00:09:08.660 "num_base_bdevs": 3, 00:09:08.661 "num_base_bdevs_discovered": 2, 00:09:08.661 "num_base_bdevs_operational": 3, 00:09:08.661 "base_bdevs_list": [ 00:09:08.661 { 00:09:08.661 "name": null, 00:09:08.661 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:08.661 "is_configured": false, 00:09:08.661 "data_offset": 0, 00:09:08.661 "data_size": 65536 00:09:08.661 }, 00:09:08.661 { 00:09:08.661 "name": "BaseBdev2", 00:09:08.661 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:08.661 "is_configured": true, 00:09:08.661 "data_offset": 0, 00:09:08.661 "data_size": 65536 00:09:08.661 }, 00:09:08.661 { 00:09:08.661 "name": "BaseBdev3", 00:09:08.661 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:08.661 "is_configured": true, 00:09:08.661 "data_offset": 0, 00:09:08.661 "data_size": 65536 00:09:08.661 } 00:09:08.661 ] 00:09:08.661 }' 00:09:08.661 15:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.661 15:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.920 15:18:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e1027972-4606-4297-aa4b-1e54ba3b1bbb 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.920 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.179 [2024-11-10 15:18:15.283966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:09.179 [2024-11-10 15:18:15.284038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.179 [2024-11-10 15:18:15.284048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:09.179 [2024-11-10 15:18:15.284341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:09.179 [2024-11-10 15:18:15.284480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007e80 00:09:09.179 [2024-11-10 15:18:15.284497] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.179 [2024-11-10 15:18:15.284692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.179 NewBaseBdev 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.179 [ 00:09:09.179 { 00:09:09.179 "name": "NewBaseBdev", 00:09:09.179 "aliases": [ 00:09:09.179 
"e1027972-4606-4297-aa4b-1e54ba3b1bbb" 00:09:09.179 ], 00:09:09.179 "product_name": "Malloc disk", 00:09:09.179 "block_size": 512, 00:09:09.179 "num_blocks": 65536, 00:09:09.179 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:09.179 "assigned_rate_limits": { 00:09:09.179 "rw_ios_per_sec": 0, 00:09:09.179 "rw_mbytes_per_sec": 0, 00:09:09.179 "r_mbytes_per_sec": 0, 00:09:09.179 "w_mbytes_per_sec": 0 00:09:09.179 }, 00:09:09.179 "claimed": true, 00:09:09.179 "claim_type": "exclusive_write", 00:09:09.179 "zoned": false, 00:09:09.179 "supported_io_types": { 00:09:09.179 "read": true, 00:09:09.179 "write": true, 00:09:09.179 "unmap": true, 00:09:09.179 "flush": true, 00:09:09.179 "reset": true, 00:09:09.179 "nvme_admin": false, 00:09:09.179 "nvme_io": false, 00:09:09.179 "nvme_io_md": false, 00:09:09.179 "write_zeroes": true, 00:09:09.179 "zcopy": true, 00:09:09.179 "get_zone_info": false, 00:09:09.179 "zone_management": false, 00:09:09.179 "zone_append": false, 00:09:09.179 "compare": false, 00:09:09.179 "compare_and_write": false, 00:09:09.179 "abort": true, 00:09:09.179 "seek_hole": false, 00:09:09.179 "seek_data": false, 00:09:09.179 "copy": true, 00:09:09.179 "nvme_iov_md": false 00:09:09.179 }, 00:09:09.179 "memory_domains": [ 00:09:09.179 { 00:09:09.179 "dma_device_id": "system", 00:09:09.179 "dma_device_type": 1 00:09:09.179 }, 00:09:09.179 { 00:09:09.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.179 "dma_device_type": 2 00:09:09.179 } 00:09:09.179 ], 00:09:09.179 "driver_specific": {} 00:09:09.179 } 00:09:09.179 ] 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:09.179 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.180 "name": "Existed_Raid", 00:09:09.180 "uuid": "8655edfb-a6d7-420d-97ab-efc0091692ab", 00:09:09.180 "strip_size_kb": 64, 00:09:09.180 "state": "online", 00:09:09.180 "raid_level": "raid0", 00:09:09.180 "superblock": false, 00:09:09.180 "num_base_bdevs": 3, 00:09:09.180 "num_base_bdevs_discovered": 3, 00:09:09.180 "num_base_bdevs_operational": 3, 00:09:09.180 "base_bdevs_list": [ 
00:09:09.180 { 00:09:09.180 "name": "NewBaseBdev", 00:09:09.180 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:09.180 "is_configured": true, 00:09:09.180 "data_offset": 0, 00:09:09.180 "data_size": 65536 00:09:09.180 }, 00:09:09.180 { 00:09:09.180 "name": "BaseBdev2", 00:09:09.180 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:09.180 "is_configured": true, 00:09:09.180 "data_offset": 0, 00:09:09.180 "data_size": 65536 00:09:09.180 }, 00:09:09.180 { 00:09:09.180 "name": "BaseBdev3", 00:09:09.180 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:09.180 "is_configured": true, 00:09:09.180 "data_offset": 0, 00:09:09.180 "data_size": 65536 00:09:09.180 } 00:09:09.180 ] 00:09:09.180 }' 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.180 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.439 15:18:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.439 [2024-11-10 15:18:15.780535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.699 "name": "Existed_Raid", 00:09:09.699 "aliases": [ 00:09:09.699 "8655edfb-a6d7-420d-97ab-efc0091692ab" 00:09:09.699 ], 00:09:09.699 "product_name": "Raid Volume", 00:09:09.699 "block_size": 512, 00:09:09.699 "num_blocks": 196608, 00:09:09.699 "uuid": "8655edfb-a6d7-420d-97ab-efc0091692ab", 00:09:09.699 "assigned_rate_limits": { 00:09:09.699 "rw_ios_per_sec": 0, 00:09:09.699 "rw_mbytes_per_sec": 0, 00:09:09.699 "r_mbytes_per_sec": 0, 00:09:09.699 "w_mbytes_per_sec": 0 00:09:09.699 }, 00:09:09.699 "claimed": false, 00:09:09.699 "zoned": false, 00:09:09.699 "supported_io_types": { 00:09:09.699 "read": true, 00:09:09.699 "write": true, 00:09:09.699 "unmap": true, 00:09:09.699 "flush": true, 00:09:09.699 "reset": true, 00:09:09.699 "nvme_admin": false, 00:09:09.699 "nvme_io": false, 00:09:09.699 "nvme_io_md": false, 00:09:09.699 "write_zeroes": true, 00:09:09.699 "zcopy": false, 00:09:09.699 "get_zone_info": false, 00:09:09.699 "zone_management": false, 00:09:09.699 "zone_append": false, 00:09:09.699 "compare": false, 00:09:09.699 "compare_and_write": false, 00:09:09.699 "abort": false, 00:09:09.699 "seek_hole": false, 00:09:09.699 "seek_data": false, 00:09:09.699 "copy": false, 00:09:09.699 "nvme_iov_md": false 00:09:09.699 }, 00:09:09.699 "memory_domains": [ 00:09:09.699 { 00:09:09.699 "dma_device_id": "system", 00:09:09.699 "dma_device_type": 1 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.699 "dma_device_type": 2 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "dma_device_id": "system", 00:09:09.699 "dma_device_type": 1 00:09:09.699 }, 
00:09:09.699 { 00:09:09.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.699 "dma_device_type": 2 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "dma_device_id": "system", 00:09:09.699 "dma_device_type": 1 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.699 "dma_device_type": 2 00:09:09.699 } 00:09:09.699 ], 00:09:09.699 "driver_specific": { 00:09:09.699 "raid": { 00:09:09.699 "uuid": "8655edfb-a6d7-420d-97ab-efc0091692ab", 00:09:09.699 "strip_size_kb": 64, 00:09:09.699 "state": "online", 00:09:09.699 "raid_level": "raid0", 00:09:09.699 "superblock": false, 00:09:09.699 "num_base_bdevs": 3, 00:09:09.699 "num_base_bdevs_discovered": 3, 00:09:09.699 "num_base_bdevs_operational": 3, 00:09:09.699 "base_bdevs_list": [ 00:09:09.699 { 00:09:09.699 "name": "NewBaseBdev", 00:09:09.699 "uuid": "e1027972-4606-4297-aa4b-1e54ba3b1bbb", 00:09:09.699 "is_configured": true, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 65536 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "name": "BaseBdev2", 00:09:09.699 "uuid": "242a9ea4-5b54-41ca-9b82-6afa364fa431", 00:09:09.699 "is_configured": true, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 65536 00:09:09.699 }, 00:09:09.699 { 00:09:09.699 "name": "BaseBdev3", 00:09:09.699 "uuid": "8a6ac1ef-5639-4361-a9cc-07679f707bf8", 00:09:09.699 "is_configured": true, 00:09:09.699 "data_offset": 0, 00:09:09.699 "data_size": 65536 00:09:09.699 } 00:09:09.699 ] 00:09:09.699 } 00:09:09.699 } 00:09:09.699 }' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:09.699 BaseBdev2 00:09:09.699 BaseBdev3' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 
00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 15:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 [2024-11-10 15:18:16.032239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:09.699 [2024-11-10 15:18:16.032285] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.699 [2024-11-10 15:18:16.032371] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:09.699 [2024-11-10 15:18:16.032438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:09.699 [2024-11-10 15:18:16.032453] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:09.699 15:18:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76417 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 76417 ']' 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 76417 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:09.699 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76417 00:09:09.959 killing process with pid 76417 00:09:09.959 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:09.959 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:09.959 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76417' 00:09:09.959 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 76417 00:09:09.959 [2024-11-10 15:18:16.069406] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:09.959 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 76417 00:09:09.959 [2024-11-10 15:18:16.128411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:10.219 00:09:10.219 real 0m8.921s 00:09:10.219 user 0m15.125s 00:09:10.219 sys 0m1.734s 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:10.219 ************************************ 00:09:10.219 END TEST raid_state_function_test 00:09:10.219 ************************************ 00:09:10.219 15:18:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:10.219 15:18:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:10.219 15:18:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.219 15:18:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:10.219 ************************************ 00:09:10.219 START TEST raid_state_function_test_sb 00:09:10.219 ************************************ 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 3 true 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i++ )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77022 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77022' 00:09:10.219 Process raid pid: 77022 00:09:10.219 
15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77022 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 77022 ']' 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:10.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:10.219 15:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.488 [2024-11-10 15:18:16.618707] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:10.488 [2024-11-10 15:18:16.618851] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.488 [2024-11-10 15:18:16.755322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:10.488 [2024-11-10 15:18:16.794581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.488 [2024-11-10 15:18:16.834397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.764 [2024-11-10 15:18:16.910979] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:10.764 [2024-11-10 15:18:16.911031] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.338 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.339 [2024-11-10 15:18:17.447495] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.339 [2024-11-10 15:18:17.447562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.339 [2024-11-10 15:18:17.447578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.339 [2024-11-10 15:18:17.447587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.339 [2024-11-10 15:18:17.447602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.339 [2024-11-10 15:18:17.447610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.339 15:18:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.339 "name": "Existed_Raid", 00:09:11.339 "uuid": "da13fc70-da7f-41ee-aecd-032a799946c7", 00:09:11.339 "strip_size_kb": 64, 
00:09:11.339 "state": "configuring", 00:09:11.339 "raid_level": "raid0", 00:09:11.339 "superblock": true, 00:09:11.339 "num_base_bdevs": 3, 00:09:11.339 "num_base_bdevs_discovered": 0, 00:09:11.339 "num_base_bdevs_operational": 3, 00:09:11.339 "base_bdevs_list": [ 00:09:11.339 { 00:09:11.339 "name": "BaseBdev1", 00:09:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.339 "is_configured": false, 00:09:11.339 "data_offset": 0, 00:09:11.339 "data_size": 0 00:09:11.339 }, 00:09:11.339 { 00:09:11.339 "name": "BaseBdev2", 00:09:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.339 "is_configured": false, 00:09:11.339 "data_offset": 0, 00:09:11.339 "data_size": 0 00:09:11.339 }, 00:09:11.339 { 00:09:11.339 "name": "BaseBdev3", 00:09:11.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.339 "is_configured": false, 00:09:11.339 "data_offset": 0, 00:09:11.339 "data_size": 0 00:09:11.339 } 00:09:11.339 ] 00:09:11.339 }' 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.339 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 [2024-11-10 15:18:17.843541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.599 [2024-11-10 15:18:17.843598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 [2024-11-10 15:18:17.851545] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:11.599 [2024-11-10 15:18:17.851609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:11.599 [2024-11-10 15:18:17.851622] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:11.599 [2024-11-10 15:18:17.851630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:11.599 [2024-11-10 15:18:17.851639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:11.599 [2024-11-10 15:18:17.851646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 [2024-11-10 15:18:17.874585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:11.599 BaseBdev1 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.599 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.599 [ 00:09:11.599 { 00:09:11.599 "name": "BaseBdev1", 00:09:11.599 "aliases": [ 00:09:11.599 "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6" 00:09:11.599 ], 00:09:11.599 "product_name": "Malloc disk", 00:09:11.599 "block_size": 512, 00:09:11.599 "num_blocks": 65536, 00:09:11.599 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:11.599 "assigned_rate_limits": { 00:09:11.599 "rw_ios_per_sec": 0, 00:09:11.599 "rw_mbytes_per_sec": 0, 00:09:11.599 "r_mbytes_per_sec": 0, 00:09:11.599 "w_mbytes_per_sec": 0 00:09:11.599 }, 00:09:11.599 "claimed": true, 00:09:11.599 "claim_type": "exclusive_write", 00:09:11.599 "zoned": false, 00:09:11.599 "supported_io_types": { 
00:09:11.599 "read": true, 00:09:11.599 "write": true, 00:09:11.599 "unmap": true, 00:09:11.599 "flush": true, 00:09:11.599 "reset": true, 00:09:11.599 "nvme_admin": false, 00:09:11.599 "nvme_io": false, 00:09:11.599 "nvme_io_md": false, 00:09:11.599 "write_zeroes": true, 00:09:11.599 "zcopy": true, 00:09:11.599 "get_zone_info": false, 00:09:11.599 "zone_management": false, 00:09:11.599 "zone_append": false, 00:09:11.599 "compare": false, 00:09:11.599 "compare_and_write": false, 00:09:11.599 "abort": true, 00:09:11.599 "seek_hole": false, 00:09:11.599 "seek_data": false, 00:09:11.599 "copy": true, 00:09:11.599 "nvme_iov_md": false 00:09:11.599 }, 00:09:11.599 "memory_domains": [ 00:09:11.599 { 00:09:11.599 "dma_device_id": "system", 00:09:11.599 "dma_device_type": 1 00:09:11.599 }, 00:09:11.599 { 00:09:11.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.599 "dma_device_type": 2 00:09:11.599 } 00:09:11.599 ], 00:09:11.599 "driver_specific": {} 00:09:11.599 } 00:09:11.600 ] 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.600 15:18:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.600 "name": "Existed_Raid", 00:09:11.600 "uuid": "aff6d9fe-a05b-4984-833b-20c7e267f2ad", 00:09:11.600 "strip_size_kb": 64, 00:09:11.600 "state": "configuring", 00:09:11.600 "raid_level": "raid0", 00:09:11.600 "superblock": true, 00:09:11.600 "num_base_bdevs": 3, 00:09:11.600 "num_base_bdevs_discovered": 1, 00:09:11.600 "num_base_bdevs_operational": 3, 00:09:11.600 "base_bdevs_list": [ 00:09:11.600 { 00:09:11.600 "name": "BaseBdev1", 00:09:11.600 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:11.600 "is_configured": true, 00:09:11.600 "data_offset": 2048, 00:09:11.600 "data_size": 63488 00:09:11.600 }, 00:09:11.600 { 00:09:11.600 "name": "BaseBdev2", 00:09:11.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.600 "is_configured": false, 00:09:11.600 "data_offset": 0, 00:09:11.600 "data_size": 0 00:09:11.600 }, 00:09:11.600 { 00:09:11.600 "name": 
"BaseBdev3", 00:09:11.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.600 "is_configured": false, 00:09:11.600 "data_offset": 0, 00:09:11.600 "data_size": 0 00:09:11.600 } 00:09:11.600 ] 00:09:11.600 }' 00:09:11.600 15:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.860 15:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.120 [2024-11-10 15:18:18.314785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.120 [2024-11-10 15:18:18.314881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.120 [2024-11-10 15:18:18.326798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.120 [2024-11-10 15:18:18.329050] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.120 [2024-11-10 15:18:18.329095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.120 [2024-11-10 15:18:18.329108] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.120 [2024-11-10 15:18:18.329115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.120 15:18:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.120 "name": "Existed_Raid", 00:09:12.120 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:12.120 "strip_size_kb": 64, 00:09:12.120 "state": "configuring", 00:09:12.120 "raid_level": "raid0", 00:09:12.120 "superblock": true, 00:09:12.120 "num_base_bdevs": 3, 00:09:12.120 "num_base_bdevs_discovered": 1, 00:09:12.120 "num_base_bdevs_operational": 3, 00:09:12.120 "base_bdevs_list": [ 00:09:12.120 { 00:09:12.120 "name": "BaseBdev1", 00:09:12.120 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:12.120 "is_configured": true, 00:09:12.120 "data_offset": 2048, 00:09:12.120 "data_size": 63488 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "name": "BaseBdev2", 00:09:12.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.120 "is_configured": false, 00:09:12.120 "data_offset": 0, 00:09:12.120 "data_size": 0 00:09:12.120 }, 00:09:12.120 { 00:09:12.120 "name": "BaseBdev3", 00:09:12.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.120 "is_configured": false, 00:09:12.120 "data_offset": 0, 00:09:12.120 "data_size": 0 00:09:12.120 } 00:09:12.120 ] 00:09:12.120 }' 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.120 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.689 [2024-11-10 15:18:18.799831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:12.689 BaseBdev2 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.689 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.689 [ 00:09:12.689 { 00:09:12.689 "name": "BaseBdev2", 00:09:12.689 "aliases": [ 00:09:12.689 
"0b9e482d-0fa3-4901-a46d-eaf1a712abc6" 00:09:12.689 ], 00:09:12.689 "product_name": "Malloc disk", 00:09:12.689 "block_size": 512, 00:09:12.689 "num_blocks": 65536, 00:09:12.689 "uuid": "0b9e482d-0fa3-4901-a46d-eaf1a712abc6", 00:09:12.689 "assigned_rate_limits": { 00:09:12.689 "rw_ios_per_sec": 0, 00:09:12.689 "rw_mbytes_per_sec": 0, 00:09:12.689 "r_mbytes_per_sec": 0, 00:09:12.689 "w_mbytes_per_sec": 0 00:09:12.689 }, 00:09:12.689 "claimed": true, 00:09:12.689 "claim_type": "exclusive_write", 00:09:12.689 "zoned": false, 00:09:12.689 "supported_io_types": { 00:09:12.689 "read": true, 00:09:12.689 "write": true, 00:09:12.689 "unmap": true, 00:09:12.689 "flush": true, 00:09:12.689 "reset": true, 00:09:12.689 "nvme_admin": false, 00:09:12.689 "nvme_io": false, 00:09:12.689 "nvme_io_md": false, 00:09:12.689 "write_zeroes": true, 00:09:12.690 "zcopy": true, 00:09:12.690 "get_zone_info": false, 00:09:12.690 "zone_management": false, 00:09:12.690 "zone_append": false, 00:09:12.690 "compare": false, 00:09:12.690 "compare_and_write": false, 00:09:12.690 "abort": true, 00:09:12.690 "seek_hole": false, 00:09:12.690 "seek_data": false, 00:09:12.690 "copy": true, 00:09:12.690 "nvme_iov_md": false 00:09:12.690 }, 00:09:12.690 "memory_domains": [ 00:09:12.690 { 00:09:12.690 "dma_device_id": "system", 00:09:12.690 "dma_device_type": 1 00:09:12.690 }, 00:09:12.690 { 00:09:12.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.690 "dma_device_type": 2 00:09:12.690 } 00:09:12.690 ], 00:09:12.690 "driver_specific": {} 00:09:12.690 } 00:09:12.690 ] 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.690 "name": "Existed_Raid", 00:09:12.690 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:12.690 
"strip_size_kb": 64, 00:09:12.690 "state": "configuring", 00:09:12.690 "raid_level": "raid0", 00:09:12.690 "superblock": true, 00:09:12.690 "num_base_bdevs": 3, 00:09:12.690 "num_base_bdevs_discovered": 2, 00:09:12.690 "num_base_bdevs_operational": 3, 00:09:12.690 "base_bdevs_list": [ 00:09:12.690 { 00:09:12.690 "name": "BaseBdev1", 00:09:12.690 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:12.690 "is_configured": true, 00:09:12.690 "data_offset": 2048, 00:09:12.690 "data_size": 63488 00:09:12.690 }, 00:09:12.690 { 00:09:12.690 "name": "BaseBdev2", 00:09:12.690 "uuid": "0b9e482d-0fa3-4901-a46d-eaf1a712abc6", 00:09:12.690 "is_configured": true, 00:09:12.690 "data_offset": 2048, 00:09:12.690 "data_size": 63488 00:09:12.690 }, 00:09:12.690 { 00:09:12.690 "name": "BaseBdev3", 00:09:12.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.690 "is_configured": false, 00:09:12.690 "data_offset": 0, 00:09:12.690 "data_size": 0 00:09:12.690 } 00:09:12.690 ] 00:09:12.690 }' 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.690 15:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.950 [2024-11-10 15:18:19.263335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.950 [2024-11-10 15:18:19.263597] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:12.950 [2024-11-10 15:18:19.263632] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:12.950 [2024-11-10 15:18:19.264074] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:12.950 [2024-11-10 15:18:19.264249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:12.950 [2024-11-10 15:18:19.264272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:12.950 BaseBdev3 00:09:12.950 [2024-11-10 15:18:19.264415] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:12.950 [ 00:09:12.950 { 00:09:12.950 "name": "BaseBdev3", 00:09:12.950 "aliases": [ 00:09:12.950 "bf0f01f6-94a1-4814-81c3-f4f91318d843" 00:09:12.950 ], 00:09:12.950 "product_name": "Malloc disk", 00:09:12.950 "block_size": 512, 00:09:12.950 "num_blocks": 65536, 00:09:12.950 "uuid": "bf0f01f6-94a1-4814-81c3-f4f91318d843", 00:09:12.950 "assigned_rate_limits": { 00:09:12.950 "rw_ios_per_sec": 0, 00:09:12.950 "rw_mbytes_per_sec": 0, 00:09:12.950 "r_mbytes_per_sec": 0, 00:09:12.950 "w_mbytes_per_sec": 0 00:09:12.950 }, 00:09:12.950 "claimed": true, 00:09:12.950 "claim_type": "exclusive_write", 00:09:12.950 "zoned": false, 00:09:12.950 "supported_io_types": { 00:09:12.950 "read": true, 00:09:12.950 "write": true, 00:09:12.950 "unmap": true, 00:09:12.950 "flush": true, 00:09:12.950 "reset": true, 00:09:12.950 "nvme_admin": false, 00:09:12.950 "nvme_io": false, 00:09:12.950 "nvme_io_md": false, 00:09:12.950 "write_zeroes": true, 00:09:12.950 "zcopy": true, 00:09:12.950 "get_zone_info": false, 00:09:12.950 "zone_management": false, 00:09:12.950 "zone_append": false, 00:09:12.950 "compare": false, 00:09:12.950 "compare_and_write": false, 00:09:12.950 "abort": true, 00:09:12.950 "seek_hole": false, 00:09:12.950 "seek_data": false, 00:09:12.950 "copy": true, 00:09:12.950 "nvme_iov_md": false 00:09:12.950 }, 00:09:12.950 "memory_domains": [ 00:09:12.950 { 00:09:12.950 "dma_device_id": "system", 00:09:12.950 "dma_device_type": 1 00:09:12.950 }, 00:09:12.950 { 00:09:12.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.950 "dma_device_type": 2 00:09:12.950 } 00:09:12.950 ], 00:09:12.950 "driver_specific": {} 00:09:12.950 } 00:09:12.950 ] 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.950 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.951 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.210 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.211 
15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.211 "name": "Existed_Raid", 00:09:13.211 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:13.211 "strip_size_kb": 64, 00:09:13.211 "state": "online", 00:09:13.211 "raid_level": "raid0", 00:09:13.211 "superblock": true, 00:09:13.211 "num_base_bdevs": 3, 00:09:13.211 "num_base_bdevs_discovered": 3, 00:09:13.211 "num_base_bdevs_operational": 3, 00:09:13.211 "base_bdevs_list": [ 00:09:13.211 { 00:09:13.211 "name": "BaseBdev1", 00:09:13.211 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:13.211 "is_configured": true, 00:09:13.211 "data_offset": 2048, 00:09:13.211 "data_size": 63488 00:09:13.211 }, 00:09:13.211 { 00:09:13.211 "name": "BaseBdev2", 00:09:13.211 "uuid": "0b9e482d-0fa3-4901-a46d-eaf1a712abc6", 00:09:13.211 "is_configured": true, 00:09:13.211 "data_offset": 2048, 00:09:13.211 "data_size": 63488 00:09:13.211 }, 00:09:13.211 { 00:09:13.211 "name": "BaseBdev3", 00:09:13.211 "uuid": "bf0f01f6-94a1-4814-81c3-f4f91318d843", 00:09:13.211 "is_configured": true, 00:09:13.211 "data_offset": 2048, 00:09:13.211 "data_size": 63488 00:09:13.211 } 00:09:13.211 ] 00:09:13.211 }' 00:09:13.211 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.211 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.469 [2024-11-10 15:18:19.759829] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.469 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.469 "name": "Existed_Raid", 00:09:13.469 "aliases": [ 00:09:13.469 "0c7720ee-548f-4166-aa70-e7d75b6bc8d7" 00:09:13.469 ], 00:09:13.469 "product_name": "Raid Volume", 00:09:13.469 "block_size": 512, 00:09:13.469 "num_blocks": 190464, 00:09:13.469 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:13.469 "assigned_rate_limits": { 00:09:13.469 "rw_ios_per_sec": 0, 00:09:13.469 "rw_mbytes_per_sec": 0, 00:09:13.469 "r_mbytes_per_sec": 0, 00:09:13.469 "w_mbytes_per_sec": 0 00:09:13.469 }, 00:09:13.469 "claimed": false, 00:09:13.469 "zoned": false, 00:09:13.469 "supported_io_types": { 00:09:13.469 "read": true, 00:09:13.469 "write": true, 00:09:13.469 "unmap": true, 00:09:13.469 "flush": true, 00:09:13.469 "reset": true, 00:09:13.469 "nvme_admin": false, 00:09:13.469 "nvme_io": false, 00:09:13.469 "nvme_io_md": false, 00:09:13.469 "write_zeroes": true, 00:09:13.469 "zcopy": false, 00:09:13.469 "get_zone_info": false, 00:09:13.469 "zone_management": false, 00:09:13.469 "zone_append": false, 00:09:13.469 "compare": false, 00:09:13.469 "compare_and_write": false, 
00:09:13.469 "abort": false, 00:09:13.469 "seek_hole": false, 00:09:13.469 "seek_data": false, 00:09:13.469 "copy": false, 00:09:13.470 "nvme_iov_md": false 00:09:13.470 }, 00:09:13.470 "memory_domains": [ 00:09:13.470 { 00:09:13.470 "dma_device_id": "system", 00:09:13.470 "dma_device_type": 1 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.470 "dma_device_type": 2 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "dma_device_id": "system", 00:09:13.470 "dma_device_type": 1 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.470 "dma_device_type": 2 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "dma_device_id": "system", 00:09:13.470 "dma_device_type": 1 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.470 "dma_device_type": 2 00:09:13.470 } 00:09:13.470 ], 00:09:13.470 "driver_specific": { 00:09:13.470 "raid": { 00:09:13.470 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:13.470 "strip_size_kb": 64, 00:09:13.470 "state": "online", 00:09:13.470 "raid_level": "raid0", 00:09:13.470 "superblock": true, 00:09:13.470 "num_base_bdevs": 3, 00:09:13.470 "num_base_bdevs_discovered": 3, 00:09:13.470 "num_base_bdevs_operational": 3, 00:09:13.470 "base_bdevs_list": [ 00:09:13.470 { 00:09:13.470 "name": "BaseBdev1", 00:09:13.470 "uuid": "3f5fe3cf-4195-4157-9a0e-8adf8404f0d6", 00:09:13.470 "is_configured": true, 00:09:13.470 "data_offset": 2048, 00:09:13.470 "data_size": 63488 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "name": "BaseBdev2", 00:09:13.470 "uuid": "0b9e482d-0fa3-4901-a46d-eaf1a712abc6", 00:09:13.470 "is_configured": true, 00:09:13.470 "data_offset": 2048, 00:09:13.470 "data_size": 63488 00:09:13.470 }, 00:09:13.470 { 00:09:13.470 "name": "BaseBdev3", 00:09:13.470 "uuid": "bf0f01f6-94a1-4814-81c3-f4f91318d843", 00:09:13.470 "is_configured": true, 00:09:13.470 "data_offset": 2048, 00:09:13.470 "data_size": 63488 00:09:13.470 } 
00:09:13.470 ] 00:09:13.470 } 00:09:13.470 } 00:09:13.470 }' 00:09:13.470 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:13.729 BaseBdev2 00:09:13.729 BaseBdev3' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.729 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.730 15:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.730 [2024-11-10 15:18:19.987612] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.730 [2024-11-10 15:18:19.987661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.730 [2024-11-10 15:18:19.987737] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.730 "name": "Existed_Raid", 00:09:13.730 "uuid": "0c7720ee-548f-4166-aa70-e7d75b6bc8d7", 00:09:13.730 "strip_size_kb": 64, 00:09:13.730 "state": "offline", 00:09:13.730 "raid_level": "raid0", 00:09:13.730 "superblock": true, 00:09:13.730 "num_base_bdevs": 3, 00:09:13.730 "num_base_bdevs_discovered": 2, 00:09:13.730 "num_base_bdevs_operational": 2, 00:09:13.730 "base_bdevs_list": [ 00:09:13.730 { 00:09:13.730 "name": null, 00:09:13.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.730 "is_configured": false, 00:09:13.730 "data_offset": 0, 00:09:13.730 "data_size": 63488 00:09:13.730 }, 00:09:13.730 { 00:09:13.730 "name": "BaseBdev2", 00:09:13.730 "uuid": "0b9e482d-0fa3-4901-a46d-eaf1a712abc6", 00:09:13.730 "is_configured": true, 00:09:13.730 "data_offset": 2048, 00:09:13.730 "data_size": 63488 00:09:13.730 }, 00:09:13.730 { 00:09:13.730 "name": "BaseBdev3", 00:09:13.730 "uuid": "bf0f01f6-94a1-4814-81c3-f4f91318d843", 00:09:13.730 "is_configured": true, 00:09:13.730 "data_offset": 2048, 00:09:13.730 "data_size": 63488 00:09:13.730 } 00:09:13.730 ] 00:09:13.730 }' 00:09:13.730 15:18:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.730 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 [2024-11-10 15:18:20.504281] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.299 
15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 [2024-11-10 15:18:20.580660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.299 [2024-11-10 15:18:20.580729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:14.299 15:18:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.299 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.559 BaseBdev2 00:09:14.559 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.559 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 [ 00:09:14.560 { 00:09:14.560 "name": "BaseBdev2", 00:09:14.560 "aliases": [ 00:09:14.560 "c80cb756-fa5e-475e-b7ca-4a470b292219" 00:09:14.560 ], 00:09:14.560 "product_name": "Malloc disk", 00:09:14.560 "block_size": 512, 00:09:14.560 "num_blocks": 65536, 00:09:14.560 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:14.560 "assigned_rate_limits": { 00:09:14.560 "rw_ios_per_sec": 0, 00:09:14.560 "rw_mbytes_per_sec": 0, 00:09:14.560 "r_mbytes_per_sec": 0, 00:09:14.560 "w_mbytes_per_sec": 0 00:09:14.560 }, 00:09:14.560 "claimed": false, 00:09:14.560 "zoned": false, 00:09:14.560 "supported_io_types": { 00:09:14.560 "read": true, 00:09:14.560 "write": true, 00:09:14.560 "unmap": true, 00:09:14.560 "flush": true, 00:09:14.560 "reset": true, 00:09:14.560 "nvme_admin": false, 00:09:14.560 "nvme_io": false, 00:09:14.560 "nvme_io_md": false, 00:09:14.560 "write_zeroes": true, 00:09:14.560 "zcopy": true, 00:09:14.560 "get_zone_info": false, 00:09:14.560 "zone_management": false, 00:09:14.560 "zone_append": false, 00:09:14.560 "compare": false, 00:09:14.560 "compare_and_write": false, 00:09:14.560 "abort": true, 00:09:14.560 "seek_hole": 
false, 00:09:14.560 "seek_data": false, 00:09:14.560 "copy": true, 00:09:14.560 "nvme_iov_md": false 00:09:14.560 }, 00:09:14.560 "memory_domains": [ 00:09:14.560 { 00:09:14.560 "dma_device_id": "system", 00:09:14.560 "dma_device_type": 1 00:09:14.560 }, 00:09:14.560 { 00:09:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.560 "dma_device_type": 2 00:09:14.560 } 00:09:14.560 ], 00:09:14.560 "driver_specific": {} 00:09:14.560 } 00:09:14.560 ] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 BaseBdev3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:14.560 15:18:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 [ 00:09:14.560 { 00:09:14.560 "name": "BaseBdev3", 00:09:14.560 "aliases": [ 00:09:14.560 "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9" 00:09:14.560 ], 00:09:14.560 "product_name": "Malloc disk", 00:09:14.560 "block_size": 512, 00:09:14.560 "num_blocks": 65536, 00:09:14.560 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:14.560 "assigned_rate_limits": { 00:09:14.560 "rw_ios_per_sec": 0, 00:09:14.560 "rw_mbytes_per_sec": 0, 00:09:14.560 "r_mbytes_per_sec": 0, 00:09:14.560 "w_mbytes_per_sec": 0 00:09:14.560 }, 00:09:14.560 "claimed": false, 00:09:14.560 "zoned": false, 00:09:14.560 "supported_io_types": { 00:09:14.560 "read": true, 00:09:14.560 "write": true, 00:09:14.560 "unmap": true, 00:09:14.560 "flush": true, 00:09:14.560 "reset": true, 00:09:14.560 "nvme_admin": false, 00:09:14.560 "nvme_io": false, 00:09:14.560 "nvme_io_md": false, 00:09:14.560 "write_zeroes": true, 00:09:14.560 "zcopy": true, 00:09:14.560 "get_zone_info": false, 00:09:14.560 "zone_management": false, 00:09:14.560 "zone_append": false, 00:09:14.560 "compare": false, 00:09:14.560 
"compare_and_write": false, 00:09:14.560 "abort": true, 00:09:14.560 "seek_hole": false, 00:09:14.560 "seek_data": false, 00:09:14.560 "copy": true, 00:09:14.560 "nvme_iov_md": false 00:09:14.560 }, 00:09:14.560 "memory_domains": [ 00:09:14.560 { 00:09:14.560 "dma_device_id": "system", 00:09:14.560 "dma_device_type": 1 00:09:14.560 }, 00:09:14.560 { 00:09:14.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.560 "dma_device_type": 2 00:09:14.560 } 00:09:14.560 ], 00:09:14.560 "driver_specific": {} 00:09:14.560 } 00:09:14.560 ] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 [2024-11-10 15:18:20.787278] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.560 [2024-11-10 15:18:20.787432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.560 [2024-11-10 15:18:20.787471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.560 [2024-11-10 15:18:20.789654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.560 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.560 "name": "Existed_Raid", 00:09:14.560 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:14.560 
"strip_size_kb": 64, 00:09:14.560 "state": "configuring", 00:09:14.560 "raid_level": "raid0", 00:09:14.560 "superblock": true, 00:09:14.560 "num_base_bdevs": 3, 00:09:14.560 "num_base_bdevs_discovered": 2, 00:09:14.560 "num_base_bdevs_operational": 3, 00:09:14.560 "base_bdevs_list": [ 00:09:14.560 { 00:09:14.560 "name": "BaseBdev1", 00:09:14.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.560 "is_configured": false, 00:09:14.560 "data_offset": 0, 00:09:14.560 "data_size": 0 00:09:14.560 }, 00:09:14.560 { 00:09:14.560 "name": "BaseBdev2", 00:09:14.561 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:14.561 "is_configured": true, 00:09:14.561 "data_offset": 2048, 00:09:14.561 "data_size": 63488 00:09:14.561 }, 00:09:14.561 { 00:09:14.561 "name": "BaseBdev3", 00:09:14.561 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:14.561 "is_configured": true, 00:09:14.561 "data_offset": 2048, 00:09:14.561 "data_size": 63488 00:09:14.561 } 00:09:14.561 ] 00:09:14.561 }' 00:09:14.561 15:18:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.561 15:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.129 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:15.129 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.129 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.129 [2024-11-10 15:18:21.227374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.130 "name": "Existed_Raid", 00:09:15.130 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:15.130 "strip_size_kb": 64, 00:09:15.130 "state": "configuring", 00:09:15.130 "raid_level": "raid0", 00:09:15.130 "superblock": true, 00:09:15.130 "num_base_bdevs": 3, 00:09:15.130 "num_base_bdevs_discovered": 1, 00:09:15.130 
"num_base_bdevs_operational": 3, 00:09:15.130 "base_bdevs_list": [ 00:09:15.130 { 00:09:15.130 "name": "BaseBdev1", 00:09:15.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.130 "is_configured": false, 00:09:15.130 "data_offset": 0, 00:09:15.130 "data_size": 0 00:09:15.130 }, 00:09:15.130 { 00:09:15.130 "name": null, 00:09:15.130 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:15.130 "is_configured": false, 00:09:15.130 "data_offset": 0, 00:09:15.130 "data_size": 63488 00:09:15.130 }, 00:09:15.130 { 00:09:15.130 "name": "BaseBdev3", 00:09:15.130 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:15.130 "is_configured": true, 00:09:15.130 "data_offset": 2048, 00:09:15.130 "data_size": 63488 00:09:15.130 } 00:09:15.130 ] 00:09:15.130 }' 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.130 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:15.390 [2024-11-10 15:18:21.728419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:15.390 BaseBdev1 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.390 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 [ 00:09:15.650 { 00:09:15.650 "name": "BaseBdev1", 00:09:15.650 "aliases": [ 00:09:15.650 "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f" 00:09:15.650 ], 00:09:15.650 "product_name": "Malloc 
disk", 00:09:15.650 "block_size": 512, 00:09:15.650 "num_blocks": 65536, 00:09:15.650 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:15.650 "assigned_rate_limits": { 00:09:15.650 "rw_ios_per_sec": 0, 00:09:15.650 "rw_mbytes_per_sec": 0, 00:09:15.650 "r_mbytes_per_sec": 0, 00:09:15.650 "w_mbytes_per_sec": 0 00:09:15.650 }, 00:09:15.650 "claimed": true, 00:09:15.650 "claim_type": "exclusive_write", 00:09:15.650 "zoned": false, 00:09:15.650 "supported_io_types": { 00:09:15.650 "read": true, 00:09:15.650 "write": true, 00:09:15.650 "unmap": true, 00:09:15.650 "flush": true, 00:09:15.650 "reset": true, 00:09:15.650 "nvme_admin": false, 00:09:15.650 "nvme_io": false, 00:09:15.650 "nvme_io_md": false, 00:09:15.650 "write_zeroes": true, 00:09:15.650 "zcopy": true, 00:09:15.650 "get_zone_info": false, 00:09:15.650 "zone_management": false, 00:09:15.650 "zone_append": false, 00:09:15.650 "compare": false, 00:09:15.650 "compare_and_write": false, 00:09:15.650 "abort": true, 00:09:15.650 "seek_hole": false, 00:09:15.650 "seek_data": false, 00:09:15.650 "copy": true, 00:09:15.650 "nvme_iov_md": false 00:09:15.650 }, 00:09:15.650 "memory_domains": [ 00:09:15.650 { 00:09:15.650 "dma_device_id": "system", 00:09:15.650 "dma_device_type": 1 00:09:15.650 }, 00:09:15.650 { 00:09:15.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.650 "dma_device_type": 2 00:09:15.650 } 00:09:15.650 ], 00:09:15.650 "driver_specific": {} 00:09:15.650 } 00:09:15.650 ] 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.650 15:18:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.650 "name": "Existed_Raid", 00:09:15.650 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:15.650 "strip_size_kb": 64, 00:09:15.650 "state": "configuring", 00:09:15.650 "raid_level": "raid0", 00:09:15.650 "superblock": true, 00:09:15.650 "num_base_bdevs": 3, 00:09:15.650 "num_base_bdevs_discovered": 2, 00:09:15.650 "num_base_bdevs_operational": 3, 00:09:15.650 "base_bdevs_list": [ 00:09:15.650 { 
00:09:15.650 "name": "BaseBdev1", 00:09:15.650 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:15.650 "is_configured": true, 00:09:15.650 "data_offset": 2048, 00:09:15.650 "data_size": 63488 00:09:15.650 }, 00:09:15.650 { 00:09:15.650 "name": null, 00:09:15.650 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:15.650 "is_configured": false, 00:09:15.650 "data_offset": 0, 00:09:15.650 "data_size": 63488 00:09:15.650 }, 00:09:15.650 { 00:09:15.650 "name": "BaseBdev3", 00:09:15.650 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:15.650 "is_configured": true, 00:09:15.650 "data_offset": 2048, 00:09:15.650 "data_size": 63488 00:09:15.650 } 00:09:15.650 ] 00:09:15.650 }' 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.650 15:18:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:15.910 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:15.911 [2024-11-10 15:18:22.244633] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.911 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.171 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.171 15:18:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.171 "name": "Existed_Raid", 00:09:16.171 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:16.171 "strip_size_kb": 64, 00:09:16.171 "state": "configuring", 00:09:16.171 "raid_level": "raid0", 00:09:16.171 "superblock": true, 00:09:16.171 "num_base_bdevs": 3, 00:09:16.171 "num_base_bdevs_discovered": 1, 00:09:16.171 "num_base_bdevs_operational": 3, 00:09:16.171 "base_bdevs_list": [ 00:09:16.171 { 00:09:16.171 "name": "BaseBdev1", 00:09:16.171 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:16.171 "is_configured": true, 00:09:16.171 "data_offset": 2048, 00:09:16.171 "data_size": 63488 00:09:16.171 }, 00:09:16.171 { 00:09:16.171 "name": null, 00:09:16.171 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:16.171 "is_configured": false, 00:09:16.171 "data_offset": 0, 00:09:16.171 "data_size": 63488 00:09:16.171 }, 00:09:16.171 { 00:09:16.171 "name": null, 00:09:16.171 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:16.171 "is_configured": false, 00:09:16.171 "data_offset": 0, 00:09:16.171 "data_size": 63488 00:09:16.171 } 00:09:16.171 ] 00:09:16.171 }' 00:09:16.171 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.171 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.431 15:18:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.431 [2024-11-10 15:18:22.732830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.431 "name": "Existed_Raid", 00:09:16.431 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:16.431 "strip_size_kb": 64, 00:09:16.431 "state": "configuring", 00:09:16.431 "raid_level": "raid0", 00:09:16.431 "superblock": true, 00:09:16.431 "num_base_bdevs": 3, 00:09:16.431 "num_base_bdevs_discovered": 2, 00:09:16.431 "num_base_bdevs_operational": 3, 00:09:16.431 "base_bdevs_list": [ 00:09:16.431 { 00:09:16.431 "name": "BaseBdev1", 00:09:16.431 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:16.431 "is_configured": true, 00:09:16.431 "data_offset": 2048, 00:09:16.431 "data_size": 63488 00:09:16.431 }, 00:09:16.431 { 00:09:16.431 "name": null, 00:09:16.431 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:16.431 "is_configured": false, 00:09:16.431 "data_offset": 0, 00:09:16.431 "data_size": 63488 00:09:16.431 }, 00:09:16.431 { 00:09:16.431 "name": "BaseBdev3", 00:09:16.431 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:16.431 "is_configured": true, 00:09:16.431 "data_offset": 2048, 00:09:16.431 "data_size": 63488 00:09:16.431 } 00:09:16.431 ] 00:09:16.431 }' 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.431 15:18:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.002 [2024-11-10 15:18:23.196960] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.002 "name": "Existed_Raid", 00:09:17.002 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:17.002 "strip_size_kb": 64, 00:09:17.002 "state": "configuring", 00:09:17.002 "raid_level": "raid0", 00:09:17.002 "superblock": true, 00:09:17.002 "num_base_bdevs": 3, 00:09:17.002 "num_base_bdevs_discovered": 1, 00:09:17.002 "num_base_bdevs_operational": 3, 00:09:17.002 "base_bdevs_list": [ 00:09:17.002 { 00:09:17.002 "name": null, 00:09:17.002 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:17.002 "is_configured": false, 00:09:17.002 "data_offset": 0, 00:09:17.002 "data_size": 63488 00:09:17.002 }, 00:09:17.002 { 00:09:17.002 "name": null, 00:09:17.002 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:17.002 "is_configured": false, 00:09:17.002 "data_offset": 0, 00:09:17.002 "data_size": 63488 00:09:17.002 }, 00:09:17.002 { 00:09:17.002 "name": "BaseBdev3", 00:09:17.002 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 
00:09:17.002 "is_configured": true, 00:09:17.002 "data_offset": 2048, 00:09:17.002 "data_size": 63488 00:09:17.002 } 00:09:17.002 ] 00:09:17.002 }' 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.002 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.262 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.262 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.262 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.262 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.262 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 [2024-11-10 15:18:23.660829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.522 "name": "Existed_Raid", 00:09:17.522 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:17.522 "strip_size_kb": 64, 00:09:17.522 "state": "configuring", 00:09:17.522 "raid_level": "raid0", 00:09:17.522 "superblock": true, 00:09:17.522 "num_base_bdevs": 3, 00:09:17.522 "num_base_bdevs_discovered": 2, 00:09:17.522 "num_base_bdevs_operational": 3, 00:09:17.522 "base_bdevs_list": [ 00:09:17.522 { 00:09:17.522 "name": null, 00:09:17.522 
"uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:17.522 "is_configured": false, 00:09:17.522 "data_offset": 0, 00:09:17.522 "data_size": 63488 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev2", 00:09:17.522 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:17.522 "is_configured": true, 00:09:17.522 "data_offset": 2048, 00:09:17.522 "data_size": 63488 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev3", 00:09:17.522 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:17.522 "is_configured": true, 00:09:17.522 "data_offset": 2048, 00:09:17.522 "data_size": 63488 00:09:17.522 } 00:09:17.522 ] 00:09:17.522 }' 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.522 15:18:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.782 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.782 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.782 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.782 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.782 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.042 [2024-11-10 15:18:24.221645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:18.042 [2024-11-10 15:18:24.221840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.042 [2024-11-10 15:18:24.221854] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:18.042 NewBaseBdev 00:09:18.042 [2024-11-10 15:18:24.222163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:18.042 [2024-11-10 15:18:24.222289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.042 [2024-11-10 15:18:24.222320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:18.042 [2024-11-10 15:18:24.222433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:18.042 15:18:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.042 [ 00:09:18.042 { 00:09:18.042 "name": "NewBaseBdev", 00:09:18.042 "aliases": [ 00:09:18.042 "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f" 00:09:18.042 ], 00:09:18.042 "product_name": "Malloc disk", 00:09:18.042 "block_size": 512, 00:09:18.042 "num_blocks": 65536, 00:09:18.042 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:18.042 "assigned_rate_limits": { 00:09:18.042 "rw_ios_per_sec": 0, 00:09:18.042 "rw_mbytes_per_sec": 0, 00:09:18.042 "r_mbytes_per_sec": 0, 00:09:18.042 "w_mbytes_per_sec": 0 00:09:18.042 }, 00:09:18.042 "claimed": true, 00:09:18.042 "claim_type": "exclusive_write", 00:09:18.042 "zoned": false, 00:09:18.042 "supported_io_types": { 00:09:18.042 "read": true, 00:09:18.042 "write": true, 00:09:18.042 "unmap": true, 00:09:18.042 "flush": true, 00:09:18.042 "reset": true, 00:09:18.042 "nvme_admin": false, 00:09:18.042 "nvme_io": 
false, 00:09:18.042 "nvme_io_md": false, 00:09:18.042 "write_zeroes": true, 00:09:18.042 "zcopy": true, 00:09:18.042 "get_zone_info": false, 00:09:18.042 "zone_management": false, 00:09:18.042 "zone_append": false, 00:09:18.042 "compare": false, 00:09:18.042 "compare_and_write": false, 00:09:18.042 "abort": true, 00:09:18.042 "seek_hole": false, 00:09:18.042 "seek_data": false, 00:09:18.042 "copy": true, 00:09:18.042 "nvme_iov_md": false 00:09:18.042 }, 00:09:18.042 "memory_domains": [ 00:09:18.042 { 00:09:18.042 "dma_device_id": "system", 00:09:18.042 "dma_device_type": 1 00:09:18.042 }, 00:09:18.042 { 00:09:18.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.042 "dma_device_type": 2 00:09:18.042 } 00:09:18.042 ], 00:09:18.042 "driver_specific": {} 00:09:18.042 } 00:09:18.042 ] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.042 15:18:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.042 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.042 "name": "Existed_Raid", 00:09:18.042 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:18.042 "strip_size_kb": 64, 00:09:18.042 "state": "online", 00:09:18.042 "raid_level": "raid0", 00:09:18.042 "superblock": true, 00:09:18.043 "num_base_bdevs": 3, 00:09:18.043 "num_base_bdevs_discovered": 3, 00:09:18.043 "num_base_bdevs_operational": 3, 00:09:18.043 "base_bdevs_list": [ 00:09:18.043 { 00:09:18.043 "name": "NewBaseBdev", 00:09:18.043 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:18.043 "is_configured": true, 00:09:18.043 "data_offset": 2048, 00:09:18.043 "data_size": 63488 00:09:18.043 }, 00:09:18.043 { 00:09:18.043 "name": "BaseBdev2", 00:09:18.043 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:18.043 "is_configured": true, 00:09:18.043 "data_offset": 2048, 00:09:18.043 "data_size": 63488 00:09:18.043 }, 00:09:18.043 { 00:09:18.043 "name": "BaseBdev3", 00:09:18.043 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:18.043 "is_configured": true, 00:09:18.043 "data_offset": 2048, 00:09:18.043 "data_size": 63488 00:09:18.043 } 00:09:18.043 ] 00:09:18.043 
}' 00:09:18.043 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.043 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 [2024-11-10 15:18:24.706226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:18.613 "name": "Existed_Raid", 00:09:18.613 "aliases": [ 00:09:18.613 "d442eedf-a58f-47d9-9d17-57c26434d25d" 00:09:18.613 ], 00:09:18.613 "product_name": "Raid Volume", 00:09:18.613 "block_size": 512, 00:09:18.613 "num_blocks": 190464, 00:09:18.613 "uuid": 
"d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:18.613 "assigned_rate_limits": { 00:09:18.613 "rw_ios_per_sec": 0, 00:09:18.613 "rw_mbytes_per_sec": 0, 00:09:18.613 "r_mbytes_per_sec": 0, 00:09:18.613 "w_mbytes_per_sec": 0 00:09:18.613 }, 00:09:18.613 "claimed": false, 00:09:18.613 "zoned": false, 00:09:18.613 "supported_io_types": { 00:09:18.613 "read": true, 00:09:18.613 "write": true, 00:09:18.613 "unmap": true, 00:09:18.613 "flush": true, 00:09:18.613 "reset": true, 00:09:18.613 "nvme_admin": false, 00:09:18.613 "nvme_io": false, 00:09:18.613 "nvme_io_md": false, 00:09:18.613 "write_zeroes": true, 00:09:18.613 "zcopy": false, 00:09:18.613 "get_zone_info": false, 00:09:18.613 "zone_management": false, 00:09:18.613 "zone_append": false, 00:09:18.613 "compare": false, 00:09:18.613 "compare_and_write": false, 00:09:18.613 "abort": false, 00:09:18.613 "seek_hole": false, 00:09:18.613 "seek_data": false, 00:09:18.613 "copy": false, 00:09:18.613 "nvme_iov_md": false 00:09:18.613 }, 00:09:18.613 "memory_domains": [ 00:09:18.613 { 00:09:18.613 "dma_device_id": "system", 00:09:18.613 "dma_device_type": 1 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.613 "dma_device_type": 2 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "dma_device_id": "system", 00:09:18.613 "dma_device_type": 1 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.613 "dma_device_type": 2 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "dma_device_id": "system", 00:09:18.613 "dma_device_type": 1 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.613 "dma_device_type": 2 00:09:18.613 } 00:09:18.613 ], 00:09:18.613 "driver_specific": { 00:09:18.613 "raid": { 00:09:18.613 "uuid": "d442eedf-a58f-47d9-9d17-57c26434d25d", 00:09:18.613 "strip_size_kb": 64, 00:09:18.613 "state": "online", 00:09:18.613 "raid_level": "raid0", 00:09:18.613 "superblock": true, 00:09:18.613 "num_base_bdevs": 
3, 00:09:18.613 "num_base_bdevs_discovered": 3, 00:09:18.613 "num_base_bdevs_operational": 3, 00:09:18.613 "base_bdevs_list": [ 00:09:18.613 { 00:09:18.613 "name": "NewBaseBdev", 00:09:18.613 "uuid": "4b4f723f-4e1d-4cac-9e72-2866d9ba7e9f", 00:09:18.613 "is_configured": true, 00:09:18.613 "data_offset": 2048, 00:09:18.613 "data_size": 63488 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "name": "BaseBdev2", 00:09:18.613 "uuid": "c80cb756-fa5e-475e-b7ca-4a470b292219", 00:09:18.613 "is_configured": true, 00:09:18.613 "data_offset": 2048, 00:09:18.613 "data_size": 63488 00:09:18.613 }, 00:09:18.613 { 00:09:18.613 "name": "BaseBdev3", 00:09:18.613 "uuid": "b315674d-a50b-4ee2-bc56-13c1a1a4c8a9", 00:09:18.613 "is_configured": true, 00:09:18.613 "data_offset": 2048, 00:09:18.613 "data_size": 63488 00:09:18.613 } 00:09:18.613 ] 00:09:18.613 } 00:09:18.613 } 00:09:18.613 }' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:18.613 BaseBdev2 00:09:18.613 BaseBdev3' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 15:18:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:18.613 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.874 [2024-11-10 15:18:24.977907] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.874 [2024-11-10 15:18:24.977951] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:18.874 [2024-11-10 15:18:24.978053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:18.874 [2024-11-10 15:18:24.978118] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:18.874 [2024-11-10 15:18:24.978129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77022 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 77022 ']' 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 77022 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:18.874 
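
[Editor's note, not part of the log: the loop above compares each base bdev against the raid volume using the same jq projection, `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")`. The check can be sketched outside the test harness as follows; Python stands in for jq here, and the assumption (consistent with the `cmp_base_bdev='512 '` values and the `[[ 512 == \5\1\2\ \ \ ]]` match above) is that jq renders absent/null fields as empty strings, so a plain 512-byte bdev with no metadata yields "512" followed by three spaces.]

```python
# Sketch of the geometry comparison the test loop performs via jq.
# geometry_key() is a hypothetical helper name, not part of bdev_raid.sh.
def geometry_key(bdev):
    # Equivalent of: jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    # Absent or null fields become empty strings, as jq's join() does.
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

# Raid volume geometry as dumped earlier in this log (block_size 512, no metadata).
cmp_raid_bdev = geometry_key({"block_size": 512, "num_blocks": 190464})

# Each base bdev must produce the identical key for the test to pass.
for name in ("NewBaseBdev", "BaseBdev2", "BaseBdev3"):
    cmp_base_bdev = geometry_key({"block_size": 512})
    assert cmp_base_bdev == cmp_raid_bdev, name

print(repr(cmp_raid_bdev))  # '512   '
```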
15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.874 15:18:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77022 00:09:18.874 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:18.874 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:18.874 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77022' 00:09:18.874 killing process with pid 77022 00:09:18.874 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 77022 00:09:18.874 [2024-11-10 15:18:25.027874] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:18.874 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 77022 00:09:18.874 [2024-11-10 15:18:25.085149] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.134 15:18:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:19.134 00:09:19.134 real 0m8.882s 00:09:19.134 user 0m14.938s 00:09:19.134 sys 0m1.812s 00:09:19.134 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:19.134 15:18:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.134 ************************************ 00:09:19.134 END TEST raid_state_function_test_sb 00:09:19.134 ************************************ 00:09:19.134 15:18:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:19.134 15:18:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:19.134 15:18:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:19.134 15:18:25 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.134 ************************************ 00:09:19.134 START TEST raid_superblock_test 00:09:19.134 ************************************ 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 3 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 
00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77620 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77620 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 77620 ']' 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:19.134 15:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.394 [2024-11-10 15:18:25.566778] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:19.394 [2024-11-10 15:18:25.566909] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77620 ] 00:09:19.394 [2024-11-10 15:18:25.701005] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:19.394 [2024-11-10 15:18:25.737692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.683 [2024-11-10 15:18:25.780117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.683 [2024-11-10 15:18:25.857518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.683 [2024-11-10 15:18:25.857568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 malloc1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 [2024-11-10 15:18:26.428749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:20.253 [2024-11-10 15:18:26.428827] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.253 [2024-11-10 15:18:26.428850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:20.253 [2024-11-10 15:18:26.428872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.253 [2024-11-10 15:18:26.431390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.253 [2024-11-10 15:18:26.431427] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:20.253 pt1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 malloc2 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 [2024-11-10 15:18:26.463336] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:20.253 [2024-11-10 15:18:26.463465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.253 [2024-11-10 15:18:26.463502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:20.253 [2024-11-10 15:18:26.463529] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.253 [2024-11-10 15:18:26.465915] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.253 [2024-11-10 15:18:26.465986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:20.253 pt2 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 malloc3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 [2024-11-10 15:18:26.501801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:20.253 [2024-11-10 15:18:26.501895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.253 [2024-11-10 15:18:26.501931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:20.253 [2024-11-10 15:18:26.501959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:20.253 [2024-11-10 15:18:26.504373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.253 [2024-11-10 15:18:26.504443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:20.253 pt3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 [2024-11-10 15:18:26.513846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:20.253 [2024-11-10 15:18:26.516020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:20.253 [2024-11-10 15:18:26.516098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:20.253 [2024-11-10 15:18:26.516238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:20.253 [2024-11-10 15:18:26.516251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:20.253 [2024-11-10 15:18:26.516523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:20.253 [2024-11-10 15:18:26.516662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:20.253 [2024-11-10 15:18:26.516671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:20.253 [2024-11-10 
15:18:26.516795] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.253 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.254 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.254 "name": "raid_bdev1", 00:09:20.254 "uuid": 
"b2654131-e096-4328-adb2-994aad4155aa", 00:09:20.254 "strip_size_kb": 64, 00:09:20.254 "state": "online", 00:09:20.254 "raid_level": "raid0", 00:09:20.254 "superblock": true, 00:09:20.254 "num_base_bdevs": 3, 00:09:20.254 "num_base_bdevs_discovered": 3, 00:09:20.254 "num_base_bdevs_operational": 3, 00:09:20.254 "base_bdevs_list": [ 00:09:20.254 { 00:09:20.254 "name": "pt1", 00:09:20.254 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "name": "pt2", 00:09:20.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 }, 00:09:20.254 { 00:09:20.254 "name": "pt3", 00:09:20.254 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.254 "is_configured": true, 00:09:20.254 "data_offset": 2048, 00:09:20.254 "data_size": 63488 00:09:20.254 } 00:09:20.254 ] 00:09:20.254 }' 00:09:20.254 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.254 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.822 15:18:26 
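
[Editor's note, not part of the log: `verify_raid_bdev_state` (invoked at bdev_raid.sh@431 above) asserts fields of the `bdev_raid_get_bdevs` dump. Distilled from the raid_bdev1 JSON shown in this log, the checks are roughly the following; the field names and values are copied from the dump, while restating them as direct assertions is an illustration, not the script's actual jq implementation.]

```python
import json

# Subset of the 'rpc_cmd bdev_raid_get_bdevs all' output for raid_bdev1,
# with values copied verbatim from this log.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}
""")

# Approximation of what verify_raid_bdev_state checks against its arguments
# (raid_bdev1 online raid0 64 3).
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "raid0"
assert raid_bdev_info["strip_size_kb"] == 64
assert (raid_bdev_info["num_base_bdevs_discovered"]
        == raid_bdev_info["num_base_bdevs_operational"] == 3)
print("raid_bdev1 state checks pass")
```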
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.822 [2024-11-10 15:18:26.946284] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.822 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.822 "name": "raid_bdev1", 00:09:20.822 "aliases": [ 00:09:20.822 "b2654131-e096-4328-adb2-994aad4155aa" 00:09:20.822 ], 00:09:20.822 "product_name": "Raid Volume", 00:09:20.822 "block_size": 512, 00:09:20.822 "num_blocks": 190464, 00:09:20.822 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:20.822 "assigned_rate_limits": { 00:09:20.822 "rw_ios_per_sec": 0, 00:09:20.822 "rw_mbytes_per_sec": 0, 00:09:20.822 "r_mbytes_per_sec": 0, 00:09:20.822 "w_mbytes_per_sec": 0 00:09:20.822 }, 00:09:20.822 "claimed": false, 00:09:20.822 "zoned": false, 00:09:20.822 "supported_io_types": { 00:09:20.822 "read": true, 00:09:20.822 "write": true, 00:09:20.822 "unmap": true, 00:09:20.822 "flush": true, 00:09:20.822 "reset": true, 00:09:20.822 "nvme_admin": false, 00:09:20.822 "nvme_io": false, 00:09:20.822 "nvme_io_md": false, 00:09:20.822 "write_zeroes": true, 00:09:20.822 "zcopy": false, 00:09:20.822 "get_zone_info": false, 00:09:20.822 "zone_management": false, 00:09:20.822 "zone_append": false, 00:09:20.822 "compare": false, 00:09:20.822 "compare_and_write": false, 00:09:20.822 "abort": false, 00:09:20.822 "seek_hole": false, 00:09:20.822 "seek_data": false, 00:09:20.822 "copy": false, 00:09:20.822 "nvme_iov_md": false 00:09:20.822 }, 00:09:20.822 "memory_domains": [ 00:09:20.822 { 00:09:20.822 "dma_device_id": "system", 00:09:20.822 "dma_device_type": 
1 00:09:20.822 }, 00:09:20.822 { 00:09:20.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.822 "dma_device_type": 2 00:09:20.822 }, 00:09:20.822 { 00:09:20.822 "dma_device_id": "system", 00:09:20.822 "dma_device_type": 1 00:09:20.822 }, 00:09:20.822 { 00:09:20.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.822 "dma_device_type": 2 00:09:20.822 }, 00:09:20.822 { 00:09:20.822 "dma_device_id": "system", 00:09:20.822 "dma_device_type": 1 00:09:20.822 }, 00:09:20.822 { 00:09:20.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.822 "dma_device_type": 2 00:09:20.822 } 00:09:20.822 ], 00:09:20.822 "driver_specific": { 00:09:20.822 "raid": { 00:09:20.822 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:20.822 "strip_size_kb": 64, 00:09:20.822 "state": "online", 00:09:20.823 "raid_level": "raid0", 00:09:20.823 "superblock": true, 00:09:20.823 "num_base_bdevs": 3, 00:09:20.823 "num_base_bdevs_discovered": 3, 00:09:20.823 "num_base_bdevs_operational": 3, 00:09:20.823 "base_bdevs_list": [ 00:09:20.823 { 00:09:20.823 "name": "pt1", 00:09:20.823 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:20.823 "is_configured": true, 00:09:20.823 "data_offset": 2048, 00:09:20.823 "data_size": 63488 00:09:20.823 }, 00:09:20.823 { 00:09:20.823 "name": "pt2", 00:09:20.823 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:20.823 "is_configured": true, 00:09:20.823 "data_offset": 2048, 00:09:20.823 "data_size": 63488 00:09:20.823 }, 00:09:20.823 { 00:09:20.823 "name": "pt3", 00:09:20.823 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:20.823 "is_configured": true, 00:09:20.823 "data_offset": 2048, 00:09:20.823 "data_size": 63488 00:09:20.823 } 00:09:20.823 ] 00:09:20.823 } 00:09:20.823 } 00:09:20.823 }' 00:09:20.823 15:18:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:09:20.823 pt2 00:09:20.823 pt3' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.823 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.082 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 [2024-11-10 15:18:27.218265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2654131-e096-4328-adb2-994aad4155aa 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b2654131-e096-4328-adb2-994aad4155aa ']' 00:09:21.083 15:18:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 [2024-11-10 15:18:27.261973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.083 [2024-11-10 15:18:27.262023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:21.083 [2024-11-10 15:18:27.262121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:21.083 [2024-11-10 15:18:27.262189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:21.083 [2024-11-10 15:18:27.262202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.083 15:18:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 [2024-11-10 15:18:27.398078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:21.083 [2024-11-10 15:18:27.400267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:21.083 [2024-11-10 15:18:27.400401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:21.083 [2024-11-10 15:18:27.400467] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:21.083 [2024-11-10 15:18:27.400521] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:21.083 [2024-11-10 15:18:27.400540] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:21.083 [2024-11-10 15:18:27.400555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:21.083 [2024-11-10 15:18:27.400570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:21.083 request: 00:09:21.083 { 00:09:21.083 "name": "raid_bdev1", 00:09:21.083 "raid_level": "raid0", 00:09:21.083 "base_bdevs": [ 00:09:21.083 "malloc1", 00:09:21.083 "malloc2", 00:09:21.083 "malloc3" 00:09:21.083 ], 00:09:21.083 "strip_size_kb": 64, 00:09:21.083 "superblock": false, 00:09:21.083 "method": "bdev_raid_create", 00:09:21.083 "req_id": 1 00:09:21.083 } 00:09:21.083 Got JSON-RPC error response 00:09:21.083 response: 00:09:21.083 { 00:09:21.083 "code": -17, 00:09:21.083 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:21.083 } 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.083 15:18:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.083 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.343 [2024-11-10 15:18:27.462041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:21.343 [2024-11-10 15:18:27.462137] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.343 [2024-11-10 15:18:27.462173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:21.343 [2024-11-10 15:18:27.462211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.343 [2024-11-10 15:18:27.464734] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.343 [2024-11-10 15:18:27.464804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:21.343 [2024-11-10 15:18:27.464897] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:21.343 [2024-11-10 15:18:27.464957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:21.343 pt1 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:21.343 15:18:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.343 "name": "raid_bdev1", 00:09:21.343 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:21.343 "strip_size_kb": 64, 00:09:21.343 "state": "configuring", 00:09:21.343 "raid_level": "raid0", 00:09:21.343 "superblock": true, 00:09:21.343 "num_base_bdevs": 3, 00:09:21.343 "num_base_bdevs_discovered": 1, 00:09:21.343 "num_base_bdevs_operational": 3, 00:09:21.343 "base_bdevs_list": [ 
00:09:21.343 { 00:09:21.343 "name": "pt1", 00:09:21.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.343 "is_configured": true, 00:09:21.343 "data_offset": 2048, 00:09:21.343 "data_size": 63488 00:09:21.343 }, 00:09:21.343 { 00:09:21.343 "name": null, 00:09:21.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.343 "is_configured": false, 00:09:21.343 "data_offset": 2048, 00:09:21.343 "data_size": 63488 00:09:21.343 }, 00:09:21.343 { 00:09:21.343 "name": null, 00:09:21.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.343 "is_configured": false, 00:09:21.343 "data_offset": 2048, 00:09:21.343 "data_size": 63488 00:09:21.343 } 00:09:21.343 ] 00:09:21.343 }' 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.343 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.603 [2024-11-10 15:18:27.866229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:21.603 [2024-11-10 15:18:27.866400] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:21.603 [2024-11-10 15:18:27.866436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:21.603 [2024-11-10 15:18:27.866446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:21.603 [2024-11-10 15:18:27.866921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:21.603 [2024-11-10 
15:18:27.866945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:21.603 [2024-11-10 15:18:27.867053] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:21.603 [2024-11-10 15:18:27.867079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:21.603 pt2 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.603 [2024-11-10 15:18:27.878250] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.603 "name": "raid_bdev1", 00:09:21.603 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:21.603 "strip_size_kb": 64, 00:09:21.603 "state": "configuring", 00:09:21.603 "raid_level": "raid0", 00:09:21.603 "superblock": true, 00:09:21.603 "num_base_bdevs": 3, 00:09:21.603 "num_base_bdevs_discovered": 1, 00:09:21.603 "num_base_bdevs_operational": 3, 00:09:21.603 "base_bdevs_list": [ 00:09:21.603 { 00:09:21.603 "name": "pt1", 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:21.603 "is_configured": true, 00:09:21.603 "data_offset": 2048, 00:09:21.603 "data_size": 63488 00:09:21.603 }, 00:09:21.603 { 00:09:21.603 "name": null, 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:21.603 "is_configured": false, 00:09:21.603 "data_offset": 0, 00:09:21.603 "data_size": 63488 00:09:21.603 }, 00:09:21.603 { 00:09:21.603 "name": null, 00:09:21.603 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:21.603 "is_configured": false, 00:09:21.603 "data_offset": 2048, 00:09:21.603 "data_size": 63488 00:09:21.603 } 00:09:21.603 ] 00:09:21.603 }' 00:09:21.603 15:18:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.603 15:18:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.172 [2024-11-10 15:18:28.334341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:22.172 [2024-11-10 15:18:28.334493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.172 [2024-11-10 15:18:28.334530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:22.172 [2024-11-10 15:18:28.334561] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.172 [2024-11-10 15:18:28.335062] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.172 [2024-11-10 15:18:28.335129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:22.172 [2024-11-10 15:18:28.335253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:22.172 [2024-11-10 15:18:28.335331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:22.172 pt2 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.172 [2024-11-10 15:18:28.346286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:22.172 [2024-11-10 15:18:28.346374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.172 [2024-11-10 15:18:28.346403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:22.172 [2024-11-10 15:18:28.346429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.172 [2024-11-10 15:18:28.346786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.172 [2024-11-10 15:18:28.346847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:22.172 [2024-11-10 15:18:28.346925] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:22.172 [2024-11-10 15:18:28.346973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:22.172 [2024-11-10 15:18:28.347104] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:22.172 [2024-11-10 15:18:28.347144] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:22.172 [2024-11-10 15:18:28.347433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:22.172 [2024-11-10 15:18:28.347581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:22.172 [2024-11-10 15:18:28.347616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:22.172 [2024-11-10 15:18:28.347756] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.172 pt3 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.172 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.172 "name": "raid_bdev1", 00:09:22.172 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:22.172 "strip_size_kb": 64, 00:09:22.172 "state": "online", 00:09:22.172 "raid_level": "raid0", 00:09:22.172 "superblock": true, 00:09:22.172 "num_base_bdevs": 3, 00:09:22.172 "num_base_bdevs_discovered": 3, 00:09:22.172 "num_base_bdevs_operational": 3, 00:09:22.172 "base_bdevs_list": [ 00:09:22.172 { 00:09:22.172 "name": "pt1", 00:09:22.172 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.172 "is_configured": true, 00:09:22.172 "data_offset": 2048, 00:09:22.172 "data_size": 63488 00:09:22.172 }, 00:09:22.172 { 00:09:22.172 "name": "pt2", 00:09:22.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.172 "is_configured": true, 00:09:22.172 "data_offset": 2048, 00:09:22.172 "data_size": 63488 00:09:22.172 }, 00:09:22.172 { 00:09:22.172 "name": "pt3", 00:09:22.172 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.173 "is_configured": true, 00:09:22.173 "data_offset": 2048, 00:09:22.173 "data_size": 63488 00:09:22.173 } 00:09:22.173 ] 00:09:22.173 }' 00:09:22.173 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.173 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.432 15:18:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.432 [2024-11-10 15:18:28.774737] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.432 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.693 "name": "raid_bdev1", 00:09:22.693 "aliases": [ 00:09:22.693 "b2654131-e096-4328-adb2-994aad4155aa" 00:09:22.693 ], 00:09:22.693 "product_name": "Raid Volume", 00:09:22.693 "block_size": 512, 00:09:22.693 "num_blocks": 190464, 00:09:22.693 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:22.693 "assigned_rate_limits": { 00:09:22.693 "rw_ios_per_sec": 0, 00:09:22.693 "rw_mbytes_per_sec": 0, 00:09:22.693 "r_mbytes_per_sec": 0, 00:09:22.693 "w_mbytes_per_sec": 0 00:09:22.693 }, 00:09:22.693 "claimed": false, 00:09:22.693 "zoned": false, 00:09:22.693 "supported_io_types": { 00:09:22.693 "read": true, 00:09:22.693 "write": true, 00:09:22.693 "unmap": true, 00:09:22.693 "flush": true, 00:09:22.693 "reset": true, 00:09:22.693 "nvme_admin": false, 00:09:22.693 "nvme_io": false, 00:09:22.693 "nvme_io_md": false, 00:09:22.693 "write_zeroes": true, 00:09:22.693 "zcopy": false, 00:09:22.693 "get_zone_info": false, 00:09:22.693 "zone_management": false, 00:09:22.693 "zone_append": false, 00:09:22.693 "compare": false, 00:09:22.693 "compare_and_write": false, 00:09:22.693 "abort": false, 00:09:22.693 "seek_hole": false, 00:09:22.693 
"seek_data": false, 00:09:22.693 "copy": false, 00:09:22.693 "nvme_iov_md": false 00:09:22.693 }, 00:09:22.693 "memory_domains": [ 00:09:22.693 { 00:09:22.693 "dma_device_id": "system", 00:09:22.693 "dma_device_type": 1 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.693 "dma_device_type": 2 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "dma_device_id": "system", 00:09:22.693 "dma_device_type": 1 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.693 "dma_device_type": 2 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "dma_device_id": "system", 00:09:22.693 "dma_device_type": 1 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.693 "dma_device_type": 2 00:09:22.693 } 00:09:22.693 ], 00:09:22.693 "driver_specific": { 00:09:22.693 "raid": { 00:09:22.693 "uuid": "b2654131-e096-4328-adb2-994aad4155aa", 00:09:22.693 "strip_size_kb": 64, 00:09:22.693 "state": "online", 00:09:22.693 "raid_level": "raid0", 00:09:22.693 "superblock": true, 00:09:22.693 "num_base_bdevs": 3, 00:09:22.693 "num_base_bdevs_discovered": 3, 00:09:22.693 "num_base_bdevs_operational": 3, 00:09:22.693 "base_bdevs_list": [ 00:09:22.693 { 00:09:22.693 "name": "pt1", 00:09:22.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:22.693 "is_configured": true, 00:09:22.693 "data_offset": 2048, 00:09:22.693 "data_size": 63488 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "name": "pt2", 00:09:22.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:22.693 "is_configured": true, 00:09:22.693 "data_offset": 2048, 00:09:22.693 "data_size": 63488 00:09:22.693 }, 00:09:22.693 { 00:09:22.693 "name": "pt3", 00:09:22.693 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:22.693 "is_configured": true, 00:09:22.693 "data_offset": 2048, 00:09:22.693 "data_size": 63488 00:09:22.693 } 00:09:22.693 ] 00:09:22.693 } 00:09:22.693 } 00:09:22.693 }' 00:09:22.693 15:18:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:22.693 pt2 00:09:22.693 pt3' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:22.693 [2024-11-10 15:18:28.966788] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.693 15:18:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.693 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
b2654131-e096-4328-adb2-994aad4155aa '!=' b2654131-e096-4328-adb2-994aad4155aa ']' 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77620 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 77620 ']' 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 77620 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77620 00:09:22.694 killing process with pid 77620 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77620' 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 77620 00:09:22.694 [2024-11-10 15:18:29.032756] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.694 [2024-11-10 15:18:29.032885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.694 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 77620 00:09:22.694 [2024-11-10 15:18:29.032954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:09:22.694 [2024-11-10 15:18:29.032969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:22.954 [2024-11-10 15:18:29.092511] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:23.213 15:18:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:23.213 00:09:23.213 real 0m3.929s 00:09:23.213 user 0m6.040s 00:09:23.213 sys 0m0.824s 00:09:23.213 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:23.213 ************************************ 00:09:23.213 END TEST raid_superblock_test 00:09:23.213 ************************************ 00:09:23.213 15:18:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.213 15:18:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:23.213 15:18:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:23.213 15:18:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:23.213 15:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:23.213 ************************************ 00:09:23.213 START TEST raid_read_error_test 00:09:23.213 ************************************ 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 read 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.213 15:18:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:23.213 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xCs7xAuJvv 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77862 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77862 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 77862 ']' 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:23.214 15:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.476 [2024-11-10 15:18:29.577961] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:23.476 [2024-11-10 15:18:29.578189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77862 ] 00:09:23.476 [2024-11-10 15:18:29.711718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:23.476 [2024-11-10 15:18:29.747809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.476 [2024-11-10 15:18:29.786360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.735 [2024-11-10 15:18:29.862395] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.735 [2024-11-10 15:18:29.862436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 BaseBdev1_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 true 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 [2024-11-10 15:18:30.433471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:24.305 [2024-11-10 15:18:30.433541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.305 [2024-11-10 15:18:30.433564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:24.305 [2024-11-10 15:18:30.433587] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.305 [2024-11-10 15:18:30.435956] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.305 [2024-11-10 15:18:30.436095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:24.305 BaseBdev1 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 BaseBdev2_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 true 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 [2024-11-10 15:18:30.480043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:24.305 [2024-11-10 15:18:30.480096] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.305 [2024-11-10 15:18:30.480112] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:24.305 [2024-11-10 15:18:30.480124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.305 [2024-11-10 15:18:30.482448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.305 [2024-11-10 15:18:30.482559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:24.305 BaseBdev2 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.305 BaseBdev3_malloc 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.305 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:24.305 
15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 true 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 [2024-11-10 15:18:30.526551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:24.306 [2024-11-10 15:18:30.526609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:24.306 [2024-11-10 15:18:30.526626] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:24.306 [2024-11-10 15:18:30.526638] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:24.306 [2024-11-10 15:18:30.528957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:24.306 [2024-11-10 15:18:30.529092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:24.306 BaseBdev3 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 [2024-11-10 15:18:30.538618] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.306 [2024-11-10 15:18:30.540740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.306 [2024-11-10 15:18:30.540870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.306 [2024-11-10 15:18:30.541070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.306 [2024-11-10 15:18:30.541084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.306 [2024-11-10 15:18:30.541353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:24.306 [2024-11-10 15:18:30.541484] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.306 [2024-11-10 15:18:30.541496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:24.306 [2024-11-10 15:18:30.541617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.306 "name": "raid_bdev1", 00:09:24.306 "uuid": "64f0dd5a-e650-4a3c-85ee-f63964ff7c5f", 00:09:24.306 "strip_size_kb": 64, 00:09:24.306 "state": "online", 00:09:24.306 "raid_level": "raid0", 00:09:24.306 "superblock": true, 00:09:24.306 "num_base_bdevs": 3, 00:09:24.306 "num_base_bdevs_discovered": 3, 00:09:24.306 "num_base_bdevs_operational": 3, 00:09:24.306 "base_bdevs_list": [ 00:09:24.306 { 00:09:24.306 "name": "BaseBdev1", 00:09:24.306 "uuid": "0f77d95b-f399-5745-8456-c04f0e096734", 00:09:24.306 "is_configured": true, 00:09:24.306 "data_offset": 2048, 00:09:24.306 "data_size": 63488 00:09:24.306 }, 00:09:24.306 { 00:09:24.306 "name": "BaseBdev2", 00:09:24.306 "uuid": "27eaecab-e3c2-54c0-b857-e90350828ad1", 00:09:24.306 "is_configured": true, 00:09:24.306 "data_offset": 2048, 00:09:24.306 "data_size": 63488 00:09:24.306 }, 00:09:24.306 { 00:09:24.306 "name": "BaseBdev3", 00:09:24.306 "uuid": "76526596-d098-5c5c-85ca-5b5ac97741d1", 00:09:24.306 "is_configured": true, 00:09:24.306 "data_offset": 
2048, 00:09:24.306 "data_size": 63488 00:09:24.306 } 00:09:24.306 ] 00:09:24.306 }' 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.306 15:18:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.875 15:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:24.875 15:18:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:24.875 [2024-11-10 15:18:31.091349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:25.815 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.816 "name": "raid_bdev1", 00:09:25.816 "uuid": "64f0dd5a-e650-4a3c-85ee-f63964ff7c5f", 00:09:25.816 "strip_size_kb": 64, 00:09:25.816 "state": "online", 00:09:25.816 "raid_level": "raid0", 00:09:25.816 "superblock": true, 00:09:25.816 "num_base_bdevs": 3, 00:09:25.816 "num_base_bdevs_discovered": 3, 00:09:25.816 "num_base_bdevs_operational": 3, 00:09:25.816 "base_bdevs_list": [ 00:09:25.816 { 00:09:25.816 "name": "BaseBdev1", 00:09:25.816 "uuid": "0f77d95b-f399-5745-8456-c04f0e096734", 00:09:25.816 "is_configured": true, 00:09:25.816 "data_offset": 2048, 00:09:25.816 "data_size": 63488 00:09:25.816 }, 00:09:25.816 { 00:09:25.816 "name": "BaseBdev2", 00:09:25.816 "uuid": "27eaecab-e3c2-54c0-b857-e90350828ad1", 00:09:25.816 "is_configured": true, 00:09:25.816 "data_offset": 2048, 
00:09:25.816 "data_size": 63488 00:09:25.816 }, 00:09:25.816 { 00:09:25.816 "name": "BaseBdev3", 00:09:25.816 "uuid": "76526596-d098-5c5c-85ca-5b5ac97741d1", 00:09:25.816 "is_configured": true, 00:09:25.816 "data_offset": 2048, 00:09:25.816 "data_size": 63488 00:09:25.816 } 00:09:25.816 ] 00:09:25.816 }' 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.816 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.076 [2024-11-10 15:18:32.414263] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:26.076 [2024-11-10 15:18:32.414403] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.076 [2024-11-10 15:18:32.416997] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.076 [2024-11-10 15:18:32.417061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.076 [2024-11-10 15:18:32.417105] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.076 [2024-11-10 15:18:32.417115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:26.076 { 00:09:26.076 "results": [ 00:09:26.076 { 00:09:26.076 "job": "raid_bdev1", 00:09:26.076 "core_mask": "0x1", 00:09:26.076 "workload": "randrw", 00:09:26.076 "percentage": 50, 00:09:26.076 "status": "finished", 00:09:26.076 "queue_depth": 1, 00:09:26.076 "io_size": 131072, 00:09:26.076 "runtime": 1.320736, 00:09:26.076 "iops": 14810.681317083809, 00:09:26.076 "mibps": 
1851.335164635476, 00:09:26.076 "io_failed": 1, 00:09:26.076 "io_timeout": 0, 00:09:26.076 "avg_latency_us": 94.70720413061028, 00:09:26.076 "min_latency_us": 22.201690926523142, 00:09:26.076 "max_latency_us": 1406.6277346814259 00:09:26.076 } 00:09:26.076 ], 00:09:26.076 "core_count": 1 00:09:26.076 } 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77862 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 77862 ']' 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 77862 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:26.076 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77862 00:09:26.336 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:26.336 killing process with pid 77862 00:09:26.336 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:26.336 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77862' 00:09:26.336 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 77862 00:09:26.336 [2024-11-10 15:18:32.464713] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.336 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 77862 00:09:26.336 [2024-11-10 15:18:32.510545] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xCs7xAuJvv 
00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:26.597 ************************************ 00:09:26.597 END TEST raid_read_error_test 00:09:26.597 ************************************ 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:26.597 00:09:26.597 real 0m3.364s 00:09:26.597 user 0m4.124s 00:09:26.597 sys 0m0.593s 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:26.597 15:18:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.597 15:18:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:26.597 15:18:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:26.597 15:18:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:26.597 15:18:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.597 ************************************ 00:09:26.597 START TEST raid_write_error_test 00:09:26.597 ************************************ 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 3 write 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:26.597 
15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Fa01hzojHy 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77997 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77997 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 77997 ']' 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.597 15:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.857 [2024-11-10 15:18:33.009779] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:09:26.857 [2024-11-10 15:18:33.009969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77997 ] 00:09:26.857 [2024-11-10 15:18:33.148390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:26.857 [2024-11-10 15:18:33.185258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.117 [2024-11-10 15:18:33.224699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.117 [2024-11-10 15:18:33.301157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.117 [2024-11-10 15:18:33.301357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 BaseBdev1_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 true 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-11-10 15:18:33.848127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:27.688 [2024-11-10 15:18:33.848203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.688 [2024-11-10 15:18:33.848222] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:27.688 [2024-11-10 15:18:33.848237] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.688 [2024-11-10 15:18:33.850587] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.688 [2024-11-10 15:18:33.850624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:27.688 BaseBdev1 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 BaseBdev2_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 true 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-11-10 15:18:33.882682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:27.688 [2024-11-10 15:18:33.882731] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.688 [2024-11-10 15:18:33.882746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:27.688 [2024-11-10 15:18:33.882757] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.688 [2024-11-10 15:18:33.885098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.688 [2024-11-10 15:18:33.885131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:27.688 BaseBdev2 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:27.688 15:18:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 BaseBdev3_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 true 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-11-10 15:18:33.917162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:27.688 [2024-11-10 15:18:33.917211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:27.688 [2024-11-10 15:18:33.917227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:27.688 [2024-11-10 15:18:33.917239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:27.688 [2024-11-10 15:18:33.919522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:27.688 [2024-11-10 15:18:33.919558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:27.688 BaseBdev3 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 [2024-11-10 15:18:33.925233] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.688 [2024-11-10 15:18:33.927310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.688 [2024-11-10 15:18:33.927387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.688 [2024-11-10 15:18:33.927565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:27.688 [2024-11-10 15:18:33.927576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:27.688 [2024-11-10 15:18:33.927815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:27.688 [2024-11-10 15:18:33.927957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:27.688 [2024-11-10 15:18:33.927975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:27.688 [2024-11-10 15:18:33.928098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:27.688 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.688 "name": "raid_bdev1", 00:09:27.688 "uuid": "1d2eb759-e091-4e74-8cc0-559b5dd9a043", 00:09:27.688 "strip_size_kb": 64, 00:09:27.688 "state": "online", 00:09:27.688 "raid_level": "raid0", 00:09:27.688 "superblock": true, 00:09:27.688 "num_base_bdevs": 3, 00:09:27.688 "num_base_bdevs_discovered": 3, 00:09:27.688 "num_base_bdevs_operational": 3, 00:09:27.688 "base_bdevs_list": [ 00:09:27.688 { 00:09:27.688 "name": "BaseBdev1", 00:09:27.688 "uuid": "2fa9d78e-fc77-5fd1-b086-b9c972f25da7", 00:09:27.688 "is_configured": true, 00:09:27.688 "data_offset": 2048, 
00:09:27.688 "data_size": 63488 00:09:27.688 }, 00:09:27.688 { 00:09:27.688 "name": "BaseBdev2", 00:09:27.689 "uuid": "52625d97-317b-5d84-adbb-dfe074926cd7", 00:09:27.689 "is_configured": true, 00:09:27.689 "data_offset": 2048, 00:09:27.689 "data_size": 63488 00:09:27.689 }, 00:09:27.689 { 00:09:27.689 "name": "BaseBdev3", 00:09:27.689 "uuid": "5407f394-03fd-5d57-a0bd-d9b9988bc684", 00:09:27.689 "is_configured": true, 00:09:27.689 "data_offset": 2048, 00:09:27.689 "data_size": 63488 00:09:27.689 } 00:09:27.689 ] 00:09:27.689 }' 00:09:27.689 15:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.689 15:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.259 15:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:28.259 15:18:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:28.259 [2024-11-10 15:18:34.449852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.199 "name": "raid_bdev1", 00:09:29.199 "uuid": "1d2eb759-e091-4e74-8cc0-559b5dd9a043", 00:09:29.199 "strip_size_kb": 64, 00:09:29.199 "state": "online", 00:09:29.199 "raid_level": "raid0", 00:09:29.199 "superblock": true, 00:09:29.199 "num_base_bdevs": 3, 00:09:29.199 "num_base_bdevs_discovered": 3, 
00:09:29.199 "num_base_bdevs_operational": 3, 00:09:29.199 "base_bdevs_list": [ 00:09:29.199 { 00:09:29.199 "name": "BaseBdev1", 00:09:29.199 "uuid": "2fa9d78e-fc77-5fd1-b086-b9c972f25da7", 00:09:29.199 "is_configured": true, 00:09:29.199 "data_offset": 2048, 00:09:29.199 "data_size": 63488 00:09:29.199 }, 00:09:29.199 { 00:09:29.199 "name": "BaseBdev2", 00:09:29.199 "uuid": "52625d97-317b-5d84-adbb-dfe074926cd7", 00:09:29.199 "is_configured": true, 00:09:29.199 "data_offset": 2048, 00:09:29.199 "data_size": 63488 00:09:29.199 }, 00:09:29.199 { 00:09:29.199 "name": "BaseBdev3", 00:09:29.199 "uuid": "5407f394-03fd-5d57-a0bd-d9b9988bc684", 00:09:29.199 "is_configured": true, 00:09:29.199 "data_offset": 2048, 00:09:29.199 "data_size": 63488 00:09:29.199 } 00:09:29.199 ] 00:09:29.199 }' 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.199 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.459 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.459 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.459 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.719 [2024-11-10 15:18:35.825318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.719 [2024-11-10 15:18:35.825463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.719 [2024-11-10 15:18:35.828040] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.719 [2024-11-10 15:18:35.828106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.719 [2024-11-10 15:18:35.828150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.719 [2024-11-10 15:18:35.828161] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:29.719 { 00:09:29.719 "results": [ 00:09:29.719 { 00:09:29.719 "job": "raid_bdev1", 00:09:29.719 "core_mask": "0x1", 00:09:29.719 "workload": "randrw", 00:09:29.719 "percentage": 50, 00:09:29.719 "status": "finished", 00:09:29.719 "queue_depth": 1, 00:09:29.719 "io_size": 131072, 00:09:29.719 "runtime": 1.373358, 00:09:29.719 "iops": 14449.255037652236, 00:09:29.719 "mibps": 1806.1568797065295, 00:09:29.719 "io_failed": 1, 00:09:29.719 "io_timeout": 0, 00:09:29.719 "avg_latency_us": 97.14703929742967, 00:09:29.719 "min_latency_us": 22.201690926523142, 00:09:29.719 "max_latency_us": 1399.4874923733985 00:09:29.719 } 00:09:29.719 ], 00:09:29.719 "core_count": 1 00:09:29.719 } 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77997 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 77997 ']' 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 77997 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77997 00:09:29.719 killing process with pid 77997 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77997' 00:09:29.719 15:18:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 77997 00:09:29.719 [2024-11-10 15:18:35.872296] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.719 15:18:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 77997 00:09:29.719 [2024-11-10 15:18:35.920619] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Fa01hzojHy 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:29.979 ************************************ 00:09:29.979 END TEST raid_write_error_test 00:09:29.979 ************************************ 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:29.979 00:09:29.979 real 0m3.350s 00:09:29.979 user 0m4.097s 00:09:29.979 sys 0m0.604s 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:29.979 15:18:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.979 15:18:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:29.979 15:18:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:09:29.979 15:18:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:29.979 15:18:36 
bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:29.979 15:18:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.979 ************************************ 00:09:29.979 START TEST raid_state_function_test 00:09:29.979 ************************************ 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 false 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:29.979 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78129 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78129' 00:09:29.980 Process raid pid: 78129 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78129 00:09:29.980 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 78129 ']' 00:09:30.240 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:30.240 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:30.240 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.240 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:30.240 15:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.240 [2024-11-10 15:18:36.420072] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:30.240 [2024-11-10 15:18:36.420269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.240 [2024-11-10 15:18:36.553457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:30.240 [2024-11-10 15:18:36.570851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.506 [2024-11-10 15:18:36.613956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.506 [2024-11-10 15:18:36.689794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.506 [2024-11-10 15:18:36.689925] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.085 [2024-11-10 15:18:37.282674] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.085 [2024-11-10 15:18:37.282741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.085 [2024-11-10 15:18:37.282754] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.085 [2024-11-10 15:18:37.282761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.085 [2024-11-10 15:18:37.282777] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.085 [2024-11-10 15:18:37.282785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.085 15:18:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.085 "name": "Existed_Raid", 00:09:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.085 "strip_size_kb": 64, 00:09:31.085 "state": "configuring", 00:09:31.085 
"raid_level": "concat", 00:09:31.085 "superblock": false, 00:09:31.085 "num_base_bdevs": 3, 00:09:31.085 "num_base_bdevs_discovered": 0, 00:09:31.085 "num_base_bdevs_operational": 3, 00:09:31.085 "base_bdevs_list": [ 00:09:31.085 { 00:09:31.085 "name": "BaseBdev1", 00:09:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.085 "is_configured": false, 00:09:31.085 "data_offset": 0, 00:09:31.085 "data_size": 0 00:09:31.085 }, 00:09:31.085 { 00:09:31.085 "name": "BaseBdev2", 00:09:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.085 "is_configured": false, 00:09:31.085 "data_offset": 0, 00:09:31.085 "data_size": 0 00:09:31.085 }, 00:09:31.085 { 00:09:31.085 "name": "BaseBdev3", 00:09:31.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.085 "is_configured": false, 00:09:31.085 "data_offset": 0, 00:09:31.085 "data_size": 0 00:09:31.085 } 00:09:31.085 ] 00:09:31.085 }' 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.085 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 [2024-11-10 15:18:37.754765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.655 [2024-11-10 15:18:37.754822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 
BaseBdev3'\''' -n Existed_Raid 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 [2024-11-10 15:18:37.762787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:31.655 [2024-11-10 15:18:37.762838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:31.655 [2024-11-10 15:18:37.762850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.655 [2024-11-10 15:18:37.762859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.655 [2024-11-10 15:18:37.762867] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.655 [2024-11-10 15:18:37.762877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 [2024-11-10 15:18:37.785877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.655 BaseBdev1 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:31.655 15:18:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.655 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.655 [ 00:09:31.655 { 00:09:31.655 "name": "BaseBdev1", 00:09:31.655 "aliases": [ 00:09:31.655 "904c250e-f6ad-4daa-a73e-cceadcbc18a0" 00:09:31.655 ], 00:09:31.655 "product_name": "Malloc disk", 00:09:31.655 "block_size": 512, 00:09:31.655 "num_blocks": 65536, 00:09:31.655 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 00:09:31.655 "assigned_rate_limits": { 00:09:31.655 "rw_ios_per_sec": 0, 00:09:31.655 "rw_mbytes_per_sec": 0, 00:09:31.655 "r_mbytes_per_sec": 0, 00:09:31.655 "w_mbytes_per_sec": 0 00:09:31.655 }, 00:09:31.655 "claimed": true, 00:09:31.655 "claim_type": "exclusive_write", 00:09:31.655 "zoned": false, 00:09:31.655 "supported_io_types": { 00:09:31.655 "read": true, 00:09:31.655 "write": true, 00:09:31.655 "unmap": true, 00:09:31.655 "flush": true, 
00:09:31.655 "reset": true, 00:09:31.655 "nvme_admin": false, 00:09:31.655 "nvme_io": false, 00:09:31.655 "nvme_io_md": false, 00:09:31.655 "write_zeroes": true, 00:09:31.655 "zcopy": true, 00:09:31.655 "get_zone_info": false, 00:09:31.655 "zone_management": false, 00:09:31.655 "zone_append": false, 00:09:31.655 "compare": false, 00:09:31.655 "compare_and_write": false, 00:09:31.655 "abort": true, 00:09:31.655 "seek_hole": false, 00:09:31.656 "seek_data": false, 00:09:31.656 "copy": true, 00:09:31.656 "nvme_iov_md": false 00:09:31.656 }, 00:09:31.656 "memory_domains": [ 00:09:31.656 { 00:09:31.656 "dma_device_id": "system", 00:09:31.656 "dma_device_type": 1 00:09:31.656 }, 00:09:31.656 { 00:09:31.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.656 "dma_device_type": 2 00:09:31.656 } 00:09:31.656 ], 00:09:31.656 "driver_specific": {} 00:09:31.656 } 00:09:31.656 ] 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.656 "name": "Existed_Raid", 00:09:31.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.656 "strip_size_kb": 64, 00:09:31.656 "state": "configuring", 00:09:31.656 "raid_level": "concat", 00:09:31.656 "superblock": false, 00:09:31.656 "num_base_bdevs": 3, 00:09:31.656 "num_base_bdevs_discovered": 1, 00:09:31.656 "num_base_bdevs_operational": 3, 00:09:31.656 "base_bdevs_list": [ 00:09:31.656 { 00:09:31.656 "name": "BaseBdev1", 00:09:31.656 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 00:09:31.656 "is_configured": true, 00:09:31.656 "data_offset": 0, 00:09:31.656 "data_size": 65536 00:09:31.656 }, 00:09:31.656 { 00:09:31.656 "name": "BaseBdev2", 00:09:31.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.656 "is_configured": false, 00:09:31.656 "data_offset": 0, 00:09:31.656 "data_size": 0 00:09:31.656 }, 00:09:31.656 { 00:09:31.656 "name": "BaseBdev3", 00:09:31.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.656 "is_configured": false, 00:09:31.656 "data_offset": 0, 00:09:31.656 "data_size": 0 
00:09:31.656 } 00:09:31.656 ] 00:09:31.656 }' 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.656 15:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.916 [2024-11-10 15:18:38.242109] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.916 [2024-11-10 15:18:38.242292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.916 [2024-11-10 15:18:38.250080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:31.916 [2024-11-10 15:18:38.252365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:31.916 [2024-11-10 15:18:38.252439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:31.916 [2024-11-10 15:18:38.252472] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:31.916 [2024-11-10 15:18:38.252494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 
doesn't exist now 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.916 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.176 15:18:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.176 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.176 "name": "Existed_Raid", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.176 "strip_size_kb": 64, 00:09:32.176 "state": "configuring", 00:09:32.176 "raid_level": "concat", 00:09:32.176 "superblock": false, 00:09:32.176 "num_base_bdevs": 3, 00:09:32.176 "num_base_bdevs_discovered": 1, 00:09:32.176 "num_base_bdevs_operational": 3, 00:09:32.176 "base_bdevs_list": [ 00:09:32.176 { 00:09:32.176 "name": "BaseBdev1", 00:09:32.176 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 00:09:32.176 "is_configured": true, 00:09:32.176 "data_offset": 0, 00:09:32.176 "data_size": 65536 00:09:32.176 }, 00:09:32.176 { 00:09:32.176 "name": "BaseBdev2", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.176 "is_configured": false, 00:09:32.176 "data_offset": 0, 00:09:32.176 "data_size": 0 00:09:32.176 }, 00:09:32.176 { 00:09:32.176 "name": "BaseBdev3", 00:09:32.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.176 "is_configured": false, 00:09:32.176 "data_offset": 0, 00:09:32.176 "data_size": 0 00:09:32.176 } 00:09:32.176 ] 00:09:32.176 }' 00:09:32.176 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.176 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 [2024-11-10 15:18:38.635209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.436 BaseBdev2 00:09:32.436 
15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 [ 00:09:32.436 { 00:09:32.436 "name": "BaseBdev2", 00:09:32.436 "aliases": [ 00:09:32.436 "54419c42-6de5-4ced-8329-291030ba5bd2" 00:09:32.436 ], 00:09:32.436 "product_name": "Malloc disk", 00:09:32.436 "block_size": 512, 00:09:32.436 "num_blocks": 65536, 00:09:32.436 "uuid": "54419c42-6de5-4ced-8329-291030ba5bd2", 00:09:32.436 "assigned_rate_limits": { 00:09:32.436 "rw_ios_per_sec": 0, 00:09:32.436 "rw_mbytes_per_sec": 0, 
00:09:32.436 "r_mbytes_per_sec": 0, 00:09:32.436 "w_mbytes_per_sec": 0 00:09:32.436 }, 00:09:32.436 "claimed": true, 00:09:32.436 "claim_type": "exclusive_write", 00:09:32.436 "zoned": false, 00:09:32.436 "supported_io_types": { 00:09:32.436 "read": true, 00:09:32.436 "write": true, 00:09:32.436 "unmap": true, 00:09:32.436 "flush": true, 00:09:32.436 "reset": true, 00:09:32.436 "nvme_admin": false, 00:09:32.436 "nvme_io": false, 00:09:32.436 "nvme_io_md": false, 00:09:32.436 "write_zeroes": true, 00:09:32.436 "zcopy": true, 00:09:32.436 "get_zone_info": false, 00:09:32.436 "zone_management": false, 00:09:32.436 "zone_append": false, 00:09:32.436 "compare": false, 00:09:32.436 "compare_and_write": false, 00:09:32.436 "abort": true, 00:09:32.436 "seek_hole": false, 00:09:32.436 "seek_data": false, 00:09:32.436 "copy": true, 00:09:32.436 "nvme_iov_md": false 00:09:32.436 }, 00:09:32.436 "memory_domains": [ 00:09:32.436 { 00:09:32.436 "dma_device_id": "system", 00:09:32.436 "dma_device_type": 1 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.436 "dma_device_type": 2 00:09:32.436 } 00:09:32.436 ], 00:09:32.436 "driver_specific": {} 00:09:32.436 } 00:09:32.436 ] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.436 "name": "Existed_Raid", 00:09:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.436 "strip_size_kb": 64, 00:09:32.436 "state": "configuring", 00:09:32.436 "raid_level": "concat", 00:09:32.436 "superblock": false, 00:09:32.436 "num_base_bdevs": 3, 00:09:32.436 "num_base_bdevs_discovered": 2, 00:09:32.436 "num_base_bdevs_operational": 3, 00:09:32.436 "base_bdevs_list": [ 00:09:32.436 { 00:09:32.436 "name": "BaseBdev1", 00:09:32.436 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 
00:09:32.436 "is_configured": true, 00:09:32.436 "data_offset": 0, 00:09:32.436 "data_size": 65536 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "name": "BaseBdev2", 00:09:32.436 "uuid": "54419c42-6de5-4ced-8329-291030ba5bd2", 00:09:32.436 "is_configured": true, 00:09:32.436 "data_offset": 0, 00:09:32.436 "data_size": 65536 00:09:32.436 }, 00:09:32.436 { 00:09:32.436 "name": "BaseBdev3", 00:09:32.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.436 "is_configured": false, 00:09:32.436 "data_offset": 0, 00:09:32.436 "data_size": 0 00:09:32.436 } 00:09:32.436 ] 00:09:32.436 }' 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.436 15:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.007 [2024-11-10 15:18:39.168139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:33.007 [2024-11-10 15:18:39.168195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:33.007 [2024-11-10 15:18:39.168204] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:33.007 [2024-11-10 15:18:39.168541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:33.007 [2024-11-10 15:18:39.168725] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:33.007 [2024-11-10 15:18:39.168749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:33.007 [2024-11-10 15:18:39.168964] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.007 BaseBdev3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.007 [ 00:09:33.007 { 00:09:33.007 "name": "BaseBdev3", 00:09:33.007 "aliases": [ 00:09:33.007 "b8e5af51-30bb-43fb-af3c-e571089b9686" 00:09:33.007 ], 00:09:33.007 "product_name": "Malloc disk", 00:09:33.007 "block_size": 512, 00:09:33.007 "num_blocks": 65536, 00:09:33.007 "uuid": "b8e5af51-30bb-43fb-af3c-e571089b9686", 00:09:33.007 
"assigned_rate_limits": { 00:09:33.007 "rw_ios_per_sec": 0, 00:09:33.007 "rw_mbytes_per_sec": 0, 00:09:33.007 "r_mbytes_per_sec": 0, 00:09:33.007 "w_mbytes_per_sec": 0 00:09:33.007 }, 00:09:33.007 "claimed": true, 00:09:33.007 "claim_type": "exclusive_write", 00:09:33.007 "zoned": false, 00:09:33.007 "supported_io_types": { 00:09:33.007 "read": true, 00:09:33.007 "write": true, 00:09:33.007 "unmap": true, 00:09:33.007 "flush": true, 00:09:33.007 "reset": true, 00:09:33.007 "nvme_admin": false, 00:09:33.007 "nvme_io": false, 00:09:33.007 "nvme_io_md": false, 00:09:33.007 "write_zeroes": true, 00:09:33.007 "zcopy": true, 00:09:33.007 "get_zone_info": false, 00:09:33.007 "zone_management": false, 00:09:33.007 "zone_append": false, 00:09:33.007 "compare": false, 00:09:33.007 "compare_and_write": false, 00:09:33.007 "abort": true, 00:09:33.007 "seek_hole": false, 00:09:33.007 "seek_data": false, 00:09:33.007 "copy": true, 00:09:33.007 "nvme_iov_md": false 00:09:33.007 }, 00:09:33.007 "memory_domains": [ 00:09:33.007 { 00:09:33.007 "dma_device_id": "system", 00:09:33.007 "dma_device_type": 1 00:09:33.007 }, 00:09:33.007 { 00:09:33.007 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.007 "dma_device_type": 2 00:09:33.007 } 00:09:33.007 ], 00:09:33.007 "driver_specific": {} 00:09:33.007 } 00:09:33.007 ] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.007 "name": "Existed_Raid", 00:09:33.007 "uuid": "66774ee4-8880-48ae-99d0-40bd3ab19459", 00:09:33.007 "strip_size_kb": 64, 00:09:33.007 "state": "online", 00:09:33.007 "raid_level": "concat", 00:09:33.007 "superblock": false, 00:09:33.007 "num_base_bdevs": 3, 00:09:33.007 "num_base_bdevs_discovered": 3, 00:09:33.007 "num_base_bdevs_operational": 3, 00:09:33.007 "base_bdevs_list": [ 00:09:33.007 { 
00:09:33.007 "name": "BaseBdev1", 00:09:33.007 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 00:09:33.007 "is_configured": true, 00:09:33.007 "data_offset": 0, 00:09:33.007 "data_size": 65536 00:09:33.007 }, 00:09:33.007 { 00:09:33.007 "name": "BaseBdev2", 00:09:33.007 "uuid": "54419c42-6de5-4ced-8329-291030ba5bd2", 00:09:33.007 "is_configured": true, 00:09:33.007 "data_offset": 0, 00:09:33.007 "data_size": 65536 00:09:33.007 }, 00:09:33.007 { 00:09:33.007 "name": "BaseBdev3", 00:09:33.007 "uuid": "b8e5af51-30bb-43fb-af3c-e571089b9686", 00:09:33.007 "is_configured": true, 00:09:33.007 "data_offset": 0, 00:09:33.007 "data_size": 65536 00:09:33.007 } 00:09:33.007 ] 00:09:33.007 }' 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.007 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.267 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:09:33.526 [2024-11-10 15:18:39.632569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.526 "name": "Existed_Raid", 00:09:33.526 "aliases": [ 00:09:33.526 "66774ee4-8880-48ae-99d0-40bd3ab19459" 00:09:33.526 ], 00:09:33.526 "product_name": "Raid Volume", 00:09:33.526 "block_size": 512, 00:09:33.526 "num_blocks": 196608, 00:09:33.526 "uuid": "66774ee4-8880-48ae-99d0-40bd3ab19459", 00:09:33.526 "assigned_rate_limits": { 00:09:33.526 "rw_ios_per_sec": 0, 00:09:33.526 "rw_mbytes_per_sec": 0, 00:09:33.526 "r_mbytes_per_sec": 0, 00:09:33.526 "w_mbytes_per_sec": 0 00:09:33.526 }, 00:09:33.526 "claimed": false, 00:09:33.526 "zoned": false, 00:09:33.526 "supported_io_types": { 00:09:33.526 "read": true, 00:09:33.526 "write": true, 00:09:33.526 "unmap": true, 00:09:33.526 "flush": true, 00:09:33.526 "reset": true, 00:09:33.526 "nvme_admin": false, 00:09:33.526 "nvme_io": false, 00:09:33.526 "nvme_io_md": false, 00:09:33.526 "write_zeroes": true, 00:09:33.526 "zcopy": false, 00:09:33.526 "get_zone_info": false, 00:09:33.526 "zone_management": false, 00:09:33.526 "zone_append": false, 00:09:33.526 "compare": false, 00:09:33.526 "compare_and_write": false, 00:09:33.526 "abort": false, 00:09:33.526 "seek_hole": false, 00:09:33.526 "seek_data": false, 00:09:33.526 "copy": false, 00:09:33.526 "nvme_iov_md": false 00:09:33.526 }, 00:09:33.526 "memory_domains": [ 00:09:33.526 { 00:09:33.526 "dma_device_id": "system", 00:09:33.526 "dma_device_type": 1 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.526 "dma_device_type": 2 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "dma_device_id": "system", 00:09:33.526 "dma_device_type": 1 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.526 "dma_device_type": 2 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "dma_device_id": "system", 00:09:33.526 "dma_device_type": 1 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.526 "dma_device_type": 2 00:09:33.526 } 00:09:33.526 ], 00:09:33.526 "driver_specific": { 00:09:33.526 "raid": { 00:09:33.526 "uuid": "66774ee4-8880-48ae-99d0-40bd3ab19459", 00:09:33.526 "strip_size_kb": 64, 00:09:33.526 "state": "online", 00:09:33.526 "raid_level": "concat", 00:09:33.526 "superblock": false, 00:09:33.526 "num_base_bdevs": 3, 00:09:33.526 "num_base_bdevs_discovered": 3, 00:09:33.526 "num_base_bdevs_operational": 3, 00:09:33.526 "base_bdevs_list": [ 00:09:33.526 { 00:09:33.526 "name": "BaseBdev1", 00:09:33.526 "uuid": "904c250e-f6ad-4daa-a73e-cceadcbc18a0", 00:09:33.526 "is_configured": true, 00:09:33.526 "data_offset": 0, 00:09:33.526 "data_size": 65536 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "name": "BaseBdev2", 00:09:33.526 "uuid": "54419c42-6de5-4ced-8329-291030ba5bd2", 00:09:33.526 "is_configured": true, 00:09:33.526 "data_offset": 0, 00:09:33.526 "data_size": 65536 00:09:33.526 }, 00:09:33.526 { 00:09:33.526 "name": "BaseBdev3", 00:09:33.526 "uuid": "b8e5af51-30bb-43fb-af3c-e571089b9686", 00:09:33.526 "is_configured": true, 00:09:33.526 "data_offset": 0, 00:09:33.526 "data_size": 65536 00:09:33.526 } 00:09:33.526 ] 00:09:33.526 } 00:09:33.526 } 00:09:33.526 }' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:33.526 BaseBdev2 00:09:33.526 BaseBdev3' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.526 15:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.526 15:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.526 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.785 [2024-11-10 15:18:39.888405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:33.785 [2024-11-10 15:18:39.888439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.785 [2024-11-10 15:18:39.888500] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.785 15:18:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.785 "name": "Existed_Raid", 00:09:33.785 "uuid": "66774ee4-8880-48ae-99d0-40bd3ab19459", 00:09:33.785 "strip_size_kb": 64, 00:09:33.785 "state": "offline", 00:09:33.785 "raid_level": "concat", 00:09:33.785 "superblock": false, 00:09:33.785 "num_base_bdevs": 3, 00:09:33.785 "num_base_bdevs_discovered": 2, 00:09:33.785 "num_base_bdevs_operational": 2, 00:09:33.785 "base_bdevs_list": [ 00:09:33.785 { 00:09:33.785 "name": null, 00:09:33.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.785 "is_configured": false, 00:09:33.785 "data_offset": 0, 00:09:33.785 "data_size": 65536 00:09:33.785 }, 00:09:33.785 { 00:09:33.785 "name": "BaseBdev2", 00:09:33.785 "uuid": "54419c42-6de5-4ced-8329-291030ba5bd2", 00:09:33.785 "is_configured": true, 00:09:33.785 "data_offset": 0, 00:09:33.785 "data_size": 65536 00:09:33.785 }, 00:09:33.785 { 00:09:33.785 "name": "BaseBdev3", 00:09:33.785 "uuid": "b8e5af51-30bb-43fb-af3c-e571089b9686", 00:09:33.785 "is_configured": true, 00:09:33.785 "data_offset": 0, 00:09:33.785 "data_size": 65536 00:09:33.785 } 00:09:33.785 ] 00:09:33.785 }' 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.785 15:18:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.045 [2024-11-10 15:18:40.376930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.045 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.305 [2024-11-10 15:18:40.457166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.305 [2024-11-10 15:18:40.457232] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.305 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.305 BaseBdev2 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 [ 00:09:34.306 { 00:09:34.306 "name": "BaseBdev2", 00:09:34.306 "aliases": [ 00:09:34.306 
"06b0508c-0845-4b5e-9ca3-ec5ff7070245" 00:09:34.306 ], 00:09:34.306 "product_name": "Malloc disk", 00:09:34.306 "block_size": 512, 00:09:34.306 "num_blocks": 65536, 00:09:34.306 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:34.306 "assigned_rate_limits": { 00:09:34.306 "rw_ios_per_sec": 0, 00:09:34.306 "rw_mbytes_per_sec": 0, 00:09:34.306 "r_mbytes_per_sec": 0, 00:09:34.306 "w_mbytes_per_sec": 0 00:09:34.306 }, 00:09:34.306 "claimed": false, 00:09:34.306 "zoned": false, 00:09:34.306 "supported_io_types": { 00:09:34.306 "read": true, 00:09:34.306 "write": true, 00:09:34.306 "unmap": true, 00:09:34.306 "flush": true, 00:09:34.306 "reset": true, 00:09:34.306 "nvme_admin": false, 00:09:34.306 "nvme_io": false, 00:09:34.306 "nvme_io_md": false, 00:09:34.306 "write_zeroes": true, 00:09:34.306 "zcopy": true, 00:09:34.306 "get_zone_info": false, 00:09:34.306 "zone_management": false, 00:09:34.306 "zone_append": false, 00:09:34.306 "compare": false, 00:09:34.306 "compare_and_write": false, 00:09:34.306 "abort": true, 00:09:34.306 "seek_hole": false, 00:09:34.306 "seek_data": false, 00:09:34.306 "copy": true, 00:09:34.306 "nvme_iov_md": false 00:09:34.306 }, 00:09:34.306 "memory_domains": [ 00:09:34.306 { 00:09:34.306 "dma_device_id": "system", 00:09:34.306 "dma_device_type": 1 00:09:34.306 }, 00:09:34.306 { 00:09:34.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.306 "dma_device_type": 2 00:09:34.306 } 00:09:34.306 ], 00:09:34.306 "driver_specific": {} 00:09:34.306 } 00:09:34.306 ] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 BaseBdev3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 [ 00:09:34.306 { 00:09:34.306 "name": "BaseBdev3", 00:09:34.306 "aliases": [ 00:09:34.306 
"0e33773d-fbeb-43fa-8162-ace9c4618307" 00:09:34.306 ], 00:09:34.306 "product_name": "Malloc disk", 00:09:34.306 "block_size": 512, 00:09:34.306 "num_blocks": 65536, 00:09:34.306 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:34.306 "assigned_rate_limits": { 00:09:34.306 "rw_ios_per_sec": 0, 00:09:34.306 "rw_mbytes_per_sec": 0, 00:09:34.306 "r_mbytes_per_sec": 0, 00:09:34.306 "w_mbytes_per_sec": 0 00:09:34.306 }, 00:09:34.306 "claimed": false, 00:09:34.306 "zoned": false, 00:09:34.306 "supported_io_types": { 00:09:34.306 "read": true, 00:09:34.306 "write": true, 00:09:34.306 "unmap": true, 00:09:34.306 "flush": true, 00:09:34.306 "reset": true, 00:09:34.306 "nvme_admin": false, 00:09:34.306 "nvme_io": false, 00:09:34.306 "nvme_io_md": false, 00:09:34.306 "write_zeroes": true, 00:09:34.306 "zcopy": true, 00:09:34.306 "get_zone_info": false, 00:09:34.306 "zone_management": false, 00:09:34.306 "zone_append": false, 00:09:34.306 "compare": false, 00:09:34.306 "compare_and_write": false, 00:09:34.306 "abort": true, 00:09:34.306 "seek_hole": false, 00:09:34.306 "seek_data": false, 00:09:34.306 "copy": true, 00:09:34.306 "nvme_iov_md": false 00:09:34.306 }, 00:09:34.306 "memory_domains": [ 00:09:34.306 { 00:09:34.306 "dma_device_id": "system", 00:09:34.306 "dma_device_type": 1 00:09:34.306 }, 00:09:34.306 { 00:09:34.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.306 "dma_device_type": 2 00:09:34.306 } 00:09:34.306 ], 00:09:34.306 "driver_specific": {} 00:09:34.306 } 00:09:34.306 ] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.306 [2024-11-10 15:18:40.647388] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.306 [2024-11-10 15:18:40.647449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.306 [2024-11-10 15:18:40.647473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.306 [2024-11-10 15:18:40.649777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.306 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.566 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.566 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.566 "name": "Existed_Raid", 00:09:34.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.566 "strip_size_kb": 64, 00:09:34.566 "state": "configuring", 00:09:34.566 "raid_level": "concat", 00:09:34.566 "superblock": false, 00:09:34.566 "num_base_bdevs": 3, 00:09:34.566 "num_base_bdevs_discovered": 2, 00:09:34.566 "num_base_bdevs_operational": 3, 00:09:34.566 "base_bdevs_list": [ 00:09:34.566 { 00:09:34.566 "name": "BaseBdev1", 00:09:34.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.566 "is_configured": false, 00:09:34.566 "data_offset": 0, 00:09:34.566 "data_size": 0 00:09:34.566 }, 00:09:34.566 { 00:09:34.566 "name": "BaseBdev2", 00:09:34.566 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:34.566 "is_configured": true, 00:09:34.566 "data_offset": 0, 00:09:34.566 "data_size": 65536 00:09:34.566 }, 00:09:34.566 { 00:09:34.566 "name": "BaseBdev3", 00:09:34.566 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:34.566 "is_configured": true, 00:09:34.566 "data_offset": 0, 00:09:34.566 "data_size": 65536 00:09:34.566 } 00:09:34.566 ] 00:09:34.566 }' 00:09:34.566 15:18:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:09:34.566 15:18:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.826 [2024-11-10 15:18:41.027547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.826 15:18:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.826 "name": "Existed_Raid", 00:09:34.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.826 "strip_size_kb": 64, 00:09:34.826 "state": "configuring", 00:09:34.826 "raid_level": "concat", 00:09:34.826 "superblock": false, 00:09:34.826 "num_base_bdevs": 3, 00:09:34.826 "num_base_bdevs_discovered": 1, 00:09:34.826 "num_base_bdevs_operational": 3, 00:09:34.826 "base_bdevs_list": [ 00:09:34.826 { 00:09:34.826 "name": "BaseBdev1", 00:09:34.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.826 "is_configured": false, 00:09:34.826 "data_offset": 0, 00:09:34.826 "data_size": 0 00:09:34.826 }, 00:09:34.826 { 00:09:34.826 "name": null, 00:09:34.826 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:34.826 "is_configured": false, 00:09:34.826 "data_offset": 0, 00:09:34.826 "data_size": 65536 00:09:34.826 }, 00:09:34.826 { 00:09:34.826 "name": "BaseBdev3", 00:09:34.826 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:34.826 "is_configured": true, 00:09:34.826 "data_offset": 0, 00:09:34.826 "data_size": 65536 00:09:34.826 } 00:09:34.826 ] 00:09:34.826 }' 00:09:34.826 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.827 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.086 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:35.086 15:18:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.086 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.086 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.346 [2024-11-10 15:18:41.488626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.346 BaseBdev1 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.346 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.346 [ 00:09:35.346 { 00:09:35.346 "name": "BaseBdev1", 00:09:35.346 "aliases": [ 00:09:35.346 "6d6bdd42-3a07-4c17-b3fe-0012044c58e6" 00:09:35.346 ], 00:09:35.346 "product_name": "Malloc disk", 00:09:35.346 "block_size": 512, 00:09:35.346 "num_blocks": 65536, 00:09:35.346 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:35.346 "assigned_rate_limits": { 00:09:35.346 "rw_ios_per_sec": 0, 00:09:35.346 "rw_mbytes_per_sec": 0, 00:09:35.346 "r_mbytes_per_sec": 0, 00:09:35.347 "w_mbytes_per_sec": 0 00:09:35.347 }, 00:09:35.347 "claimed": true, 00:09:35.347 "claim_type": "exclusive_write", 00:09:35.347 "zoned": false, 00:09:35.347 "supported_io_types": { 00:09:35.347 "read": true, 00:09:35.347 "write": true, 00:09:35.347 "unmap": true, 00:09:35.347 "flush": true, 00:09:35.347 "reset": true, 00:09:35.347 "nvme_admin": false, 00:09:35.347 "nvme_io": false, 00:09:35.347 "nvme_io_md": false, 00:09:35.347 "write_zeroes": true, 00:09:35.347 "zcopy": true, 00:09:35.347 "get_zone_info": false, 00:09:35.347 "zone_management": false, 00:09:35.347 "zone_append": false, 00:09:35.347 "compare": false, 00:09:35.347 "compare_and_write": false, 00:09:35.347 "abort": true, 00:09:35.347 "seek_hole": false, 00:09:35.347 "seek_data": false, 00:09:35.347 "copy": true, 00:09:35.347 "nvme_iov_md": false 00:09:35.347 }, 00:09:35.347 "memory_domains": [ 00:09:35.347 { 00:09:35.347 
"dma_device_id": "system", 00:09:35.347 "dma_device_type": 1 00:09:35.347 }, 00:09:35.347 { 00:09:35.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.347 "dma_device_type": 2 00:09:35.347 } 00:09:35.347 ], 00:09:35.347 "driver_specific": {} 00:09:35.347 } 00:09:35.347 ] 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.347 "name": "Existed_Raid", 00:09:35.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.347 "strip_size_kb": 64, 00:09:35.347 "state": "configuring", 00:09:35.347 "raid_level": "concat", 00:09:35.347 "superblock": false, 00:09:35.347 "num_base_bdevs": 3, 00:09:35.347 "num_base_bdevs_discovered": 2, 00:09:35.347 "num_base_bdevs_operational": 3, 00:09:35.347 "base_bdevs_list": [ 00:09:35.347 { 00:09:35.347 "name": "BaseBdev1", 00:09:35.347 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:35.347 "is_configured": true, 00:09:35.347 "data_offset": 0, 00:09:35.347 "data_size": 65536 00:09:35.347 }, 00:09:35.347 { 00:09:35.347 "name": null, 00:09:35.347 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:35.347 "is_configured": false, 00:09:35.347 "data_offset": 0, 00:09:35.347 "data_size": 65536 00:09:35.347 }, 00:09:35.347 { 00:09:35.347 "name": "BaseBdev3", 00:09:35.347 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:35.347 "is_configured": true, 00:09:35.347 "data_offset": 0, 00:09:35.347 "data_size": 65536 00:09:35.347 } 00:09:35.347 ] 00:09:35.347 }' 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.347 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.607 [2024-11-10 15:18:41.924807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.607 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.867 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.867 "name": "Existed_Raid", 00:09:35.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.867 "strip_size_kb": 64, 00:09:35.867 "state": "configuring", 00:09:35.867 "raid_level": "concat", 00:09:35.867 "superblock": false, 00:09:35.867 "num_base_bdevs": 3, 00:09:35.867 "num_base_bdevs_discovered": 1, 00:09:35.867 "num_base_bdevs_operational": 3, 00:09:35.867 "base_bdevs_list": [ 00:09:35.867 { 00:09:35.867 "name": "BaseBdev1", 00:09:35.867 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:35.867 "is_configured": true, 00:09:35.867 "data_offset": 0, 00:09:35.867 "data_size": 65536 00:09:35.867 }, 00:09:35.867 { 00:09:35.867 "name": null, 00:09:35.867 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:35.867 "is_configured": false, 00:09:35.867 "data_offset": 0, 00:09:35.867 "data_size": 65536 00:09:35.867 }, 00:09:35.867 { 00:09:35.867 "name": null, 00:09:35.867 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:35.867 "is_configured": false, 00:09:35.867 "data_offset": 0, 00:09:35.867 "data_size": 65536 00:09:35.867 } 00:09:35.867 ] 00:09:35.867 }' 00:09:35.867 15:18:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.867 15:18:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 [2024-11-10 15:18:42.389002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.127 "name": "Existed_Raid", 00:09:36.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.127 "strip_size_kb": 64, 00:09:36.127 "state": "configuring", 00:09:36.127 "raid_level": "concat", 00:09:36.127 "superblock": false, 00:09:36.127 "num_base_bdevs": 3, 00:09:36.127 "num_base_bdevs_discovered": 2, 00:09:36.127 "num_base_bdevs_operational": 3, 00:09:36.127 "base_bdevs_list": [ 00:09:36.127 { 00:09:36.127 "name": "BaseBdev1", 00:09:36.127 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:36.127 "is_configured": true, 00:09:36.127 "data_offset": 0, 00:09:36.127 "data_size": 65536 00:09:36.127 }, 00:09:36.127 { 00:09:36.127 "name": null, 00:09:36.127 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:36.127 "is_configured": false, 00:09:36.127 "data_offset": 0, 00:09:36.127 "data_size": 65536 00:09:36.127 }, 00:09:36.127 { 
00:09:36.127 "name": "BaseBdev3", 00:09:36.127 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:36.127 "is_configured": true, 00:09:36.127 "data_offset": 0, 00:09:36.127 "data_size": 65536 00:09:36.127 } 00:09:36.127 ] 00:09:36.127 }' 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.127 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 [2024-11-10 15:18:42.865142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.696 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.696 "name": "Existed_Raid", 00:09:36.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.697 "strip_size_kb": 64, 00:09:36.697 "state": "configuring", 00:09:36.697 "raid_level": "concat", 00:09:36.697 "superblock": false, 00:09:36.697 "num_base_bdevs": 3, 00:09:36.697 "num_base_bdevs_discovered": 1, 00:09:36.697 "num_base_bdevs_operational": 3, 00:09:36.697 "base_bdevs_list": [ 00:09:36.697 { 00:09:36.697 "name": null, 00:09:36.697 "uuid": 
"6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:36.697 "is_configured": false, 00:09:36.697 "data_offset": 0, 00:09:36.697 "data_size": 65536 00:09:36.697 }, 00:09:36.697 { 00:09:36.697 "name": null, 00:09:36.697 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:36.697 "is_configured": false, 00:09:36.697 "data_offset": 0, 00:09:36.697 "data_size": 65536 00:09:36.697 }, 00:09:36.697 { 00:09:36.697 "name": "BaseBdev3", 00:09:36.697 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:36.697 "is_configured": true, 00:09:36.697 "data_offset": 0, 00:09:36.697 "data_size": 65536 00:09:36.697 } 00:09:36.697 ] 00:09:36.697 }' 00:09:36.697 15:18:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.697 15:18:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.956 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.956 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.956 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.956 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.956 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.216 [2024-11-10 15:18:43.328999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.216 "name": "Existed_Raid", 00:09:37.216 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.216 "strip_size_kb": 64, 00:09:37.216 "state": "configuring", 00:09:37.216 "raid_level": "concat", 00:09:37.216 "superblock": false, 00:09:37.216 "num_base_bdevs": 3, 00:09:37.216 "num_base_bdevs_discovered": 2, 00:09:37.216 "num_base_bdevs_operational": 3, 00:09:37.216 "base_bdevs_list": [ 00:09:37.216 { 00:09:37.216 "name": null, 00:09:37.216 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:37.216 "is_configured": false, 00:09:37.216 "data_offset": 0, 00:09:37.216 "data_size": 65536 00:09:37.216 }, 00:09:37.216 { 00:09:37.216 "name": "BaseBdev2", 00:09:37.216 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:37.216 "is_configured": true, 00:09:37.216 "data_offset": 0, 00:09:37.216 "data_size": 65536 00:09:37.216 }, 00:09:37.216 { 00:09:37.216 "name": "BaseBdev3", 00:09:37.216 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:37.216 "is_configured": true, 00:09:37.216 "data_offset": 0, 00:09:37.216 "data_size": 65536 00:09:37.216 } 00:09:37.216 ] 00:09:37.216 }' 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.216 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6d6bdd42-3a07-4c17-b3fe-0012044c58e6 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.476 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.736 [2024-11-10 15:18:43.845916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:37.736 [2024-11-10 15:18:43.845975] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:37.736 [2024-11-10 15:18:43.845983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:37.736 [2024-11-10 15:18:43.846285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:37.736 [2024-11-10 15:18:43.846422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:37.736 [2024-11-10 15:18:43.846441] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:37.736 [2024-11-10 15:18:43.846635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:37.736 NewBaseBdev 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.736 [ 00:09:37.736 { 00:09:37.736 "name": "NewBaseBdev", 00:09:37.736 "aliases": [ 00:09:37.736 "6d6bdd42-3a07-4c17-b3fe-0012044c58e6" 00:09:37.736 ], 00:09:37.736 "product_name": "Malloc disk", 00:09:37.736 "block_size": 512, 00:09:37.736 "num_blocks": 65536, 00:09:37.736 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:37.736 "assigned_rate_limits": { 00:09:37.736 "rw_ios_per_sec": 0, 00:09:37.736 "rw_mbytes_per_sec": 0, 00:09:37.736 "r_mbytes_per_sec": 0, 00:09:37.736 "w_mbytes_per_sec": 0 00:09:37.736 }, 00:09:37.736 "claimed": true, 00:09:37.736 "claim_type": 
"exclusive_write", 00:09:37.736 "zoned": false, 00:09:37.736 "supported_io_types": { 00:09:37.736 "read": true, 00:09:37.736 "write": true, 00:09:37.736 "unmap": true, 00:09:37.736 "flush": true, 00:09:37.736 "reset": true, 00:09:37.736 "nvme_admin": false, 00:09:37.736 "nvme_io": false, 00:09:37.736 "nvme_io_md": false, 00:09:37.736 "write_zeroes": true, 00:09:37.736 "zcopy": true, 00:09:37.736 "get_zone_info": false, 00:09:37.736 "zone_management": false, 00:09:37.736 "zone_append": false, 00:09:37.736 "compare": false, 00:09:37.736 "compare_and_write": false, 00:09:37.736 "abort": true, 00:09:37.736 "seek_hole": false, 00:09:37.736 "seek_data": false, 00:09:37.736 "copy": true, 00:09:37.736 "nvme_iov_md": false 00:09:37.736 }, 00:09:37.736 "memory_domains": [ 00:09:37.736 { 00:09:37.736 "dma_device_id": "system", 00:09:37.736 "dma_device_type": 1 00:09:37.736 }, 00:09:37.736 { 00:09:37.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.736 "dma_device_type": 2 00:09:37.736 } 00:09:37.736 ], 00:09:37.736 "driver_specific": {} 00:09:37.736 } 00:09:37.736 ] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.736 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.736 "name": "Existed_Raid", 00:09:37.736 "uuid": "be60ec60-cd62-4336-a7a0-bc5593ffedd7", 00:09:37.736 "strip_size_kb": 64, 00:09:37.736 "state": "online", 00:09:37.736 "raid_level": "concat", 00:09:37.736 "superblock": false, 00:09:37.736 "num_base_bdevs": 3, 00:09:37.736 "num_base_bdevs_discovered": 3, 00:09:37.736 "num_base_bdevs_operational": 3, 00:09:37.736 "base_bdevs_list": [ 00:09:37.736 { 00:09:37.736 "name": "NewBaseBdev", 00:09:37.736 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:37.736 "is_configured": true, 00:09:37.736 "data_offset": 0, 00:09:37.736 "data_size": 65536 00:09:37.736 }, 00:09:37.736 { 00:09:37.736 "name": "BaseBdev2", 00:09:37.737 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:37.737 "is_configured": true, 00:09:37.737 "data_offset": 0, 00:09:37.737 "data_size": 65536 00:09:37.737 }, 00:09:37.737 { 
00:09:37.737 "name": "BaseBdev3", 00:09:37.737 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:37.737 "is_configured": true, 00:09:37.737 "data_offset": 0, 00:09:37.737 "data_size": 65536 00:09:37.737 } 00:09:37.737 ] 00:09:37.737 }' 00:09:37.737 15:18:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.737 15:18:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.996 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.996 [2024-11-10 15:18:44.338476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.256 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.256 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:38.256 "name": "Existed_Raid", 00:09:38.256 "aliases": [ 00:09:38.256 
"be60ec60-cd62-4336-a7a0-bc5593ffedd7" 00:09:38.256 ], 00:09:38.256 "product_name": "Raid Volume", 00:09:38.256 "block_size": 512, 00:09:38.256 "num_blocks": 196608, 00:09:38.256 "uuid": "be60ec60-cd62-4336-a7a0-bc5593ffedd7", 00:09:38.256 "assigned_rate_limits": { 00:09:38.256 "rw_ios_per_sec": 0, 00:09:38.256 "rw_mbytes_per_sec": 0, 00:09:38.256 "r_mbytes_per_sec": 0, 00:09:38.256 "w_mbytes_per_sec": 0 00:09:38.256 }, 00:09:38.256 "claimed": false, 00:09:38.256 "zoned": false, 00:09:38.256 "supported_io_types": { 00:09:38.256 "read": true, 00:09:38.256 "write": true, 00:09:38.256 "unmap": true, 00:09:38.256 "flush": true, 00:09:38.256 "reset": true, 00:09:38.256 "nvme_admin": false, 00:09:38.256 "nvme_io": false, 00:09:38.256 "nvme_io_md": false, 00:09:38.256 "write_zeroes": true, 00:09:38.256 "zcopy": false, 00:09:38.256 "get_zone_info": false, 00:09:38.256 "zone_management": false, 00:09:38.256 "zone_append": false, 00:09:38.256 "compare": false, 00:09:38.256 "compare_and_write": false, 00:09:38.256 "abort": false, 00:09:38.256 "seek_hole": false, 00:09:38.256 "seek_data": false, 00:09:38.256 "copy": false, 00:09:38.256 "nvme_iov_md": false 00:09:38.256 }, 00:09:38.256 "memory_domains": [ 00:09:38.256 { 00:09:38.256 "dma_device_id": "system", 00:09:38.256 "dma_device_type": 1 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.256 "dma_device_type": 2 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "dma_device_id": "system", 00:09:38.256 "dma_device_type": 1 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.256 "dma_device_type": 2 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "dma_device_id": "system", 00:09:38.256 "dma_device_type": 1 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.256 "dma_device_type": 2 00:09:38.256 } 00:09:38.256 ], 00:09:38.256 "driver_specific": { 00:09:38.256 "raid": { 00:09:38.256 "uuid": 
"be60ec60-cd62-4336-a7a0-bc5593ffedd7", 00:09:38.256 "strip_size_kb": 64, 00:09:38.256 "state": "online", 00:09:38.256 "raid_level": "concat", 00:09:38.256 "superblock": false, 00:09:38.256 "num_base_bdevs": 3, 00:09:38.256 "num_base_bdevs_discovered": 3, 00:09:38.256 "num_base_bdevs_operational": 3, 00:09:38.256 "base_bdevs_list": [ 00:09:38.256 { 00:09:38.256 "name": "NewBaseBdev", 00:09:38.256 "uuid": "6d6bdd42-3a07-4c17-b3fe-0012044c58e6", 00:09:38.256 "is_configured": true, 00:09:38.256 "data_offset": 0, 00:09:38.256 "data_size": 65536 00:09:38.256 }, 00:09:38.256 { 00:09:38.256 "name": "BaseBdev2", 00:09:38.256 "uuid": "06b0508c-0845-4b5e-9ca3-ec5ff7070245", 00:09:38.256 "is_configured": true, 00:09:38.256 "data_offset": 0, 00:09:38.257 "data_size": 65536 00:09:38.257 }, 00:09:38.257 { 00:09:38.257 "name": "BaseBdev3", 00:09:38.257 "uuid": "0e33773d-fbeb-43fa-8162-ace9c4618307", 00:09:38.257 "is_configured": true, 00:09:38.257 "data_offset": 0, 00:09:38.257 "data_size": 65536 00:09:38.257 } 00:09:38.257 ] 00:09:38.257 } 00:09:38.257 } 00:09:38.257 }' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:38.257 BaseBdev2 00:09:38.257 BaseBdev3' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 
00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.257 [2024-11-10 15:18:44.598204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:38.257 [2024-11-10 15:18:44.598257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:38.257 [2024-11-10 15:18:44.598353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:38.257 [2024-11-10 15:18:44.598418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:38.257 [2024-11-10 15:18:44.598429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78129 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 78129 ']' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 78129 00:09:38.257 
15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:38.257 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78129 00:09:38.517 killing process with pid 78129 00:09:38.517 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:38.517 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:38.517 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78129' 00:09:38.517 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 78129 00:09:38.517 [2024-11-10 15:18:44.646595] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.517 15:18:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 78129 00:09:38.517 [2024-11-10 15:18:44.703904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.777 00:09:38.777 real 0m8.704s 00:09:38.777 user 0m14.645s 00:09:38.777 sys 0m1.775s 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:38.777 ************************************ 00:09:38.777 END TEST raid_state_function_test 00:09:38.777 ************************************ 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.777 15:18:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:38.777 15:18:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:38.777 15:18:45 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:09:38.777 15:18:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.777 ************************************ 00:09:38.777 START TEST raid_state_function_test_sb 00:09:38.777 ************************************ 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 3 true 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78734 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78734' 00:09:38.777 Process raid pid: 78734 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78734 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 78734 ']' 00:09:38.777 
15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:38.777 15:18:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.042 [2024-11-10 15:18:45.191899] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:39.042 [2024-11-10 15:18:45.192125] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.042 [2024-11-10 15:18:45.326572] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:39.042 [2024-11-10 15:18:45.364623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.323 [2024-11-10 15:18:45.404037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.323 [2024-11-10 15:18:45.480086] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.323 [2024-11-10 15:18:45.480125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.893 [2024-11-10 15:18:46.024279] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.893 [2024-11-10 15:18:46.024349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.893 [2024-11-10 15:18:46.024364] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.893 [2024-11-10 15:18:46.024371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.893 [2024-11-10 15:18:46.024386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.893 [2024-11-10 15:18:46.024393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.893 15:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.893 "name": "Existed_Raid", 00:09:39.893 "uuid": "e1a2a6fd-355f-47b2-9a1e-ffb159dd11ef", 00:09:39.893 "strip_size_kb": 64, 
00:09:39.893 "state": "configuring", 00:09:39.893 "raid_level": "concat", 00:09:39.893 "superblock": true, 00:09:39.893 "num_base_bdevs": 3, 00:09:39.893 "num_base_bdevs_discovered": 0, 00:09:39.893 "num_base_bdevs_operational": 3, 00:09:39.893 "base_bdevs_list": [ 00:09:39.893 { 00:09:39.893 "name": "BaseBdev1", 00:09:39.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.893 "is_configured": false, 00:09:39.893 "data_offset": 0, 00:09:39.893 "data_size": 0 00:09:39.893 }, 00:09:39.893 { 00:09:39.893 "name": "BaseBdev2", 00:09:39.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.893 "is_configured": false, 00:09:39.893 "data_offset": 0, 00:09:39.893 "data_size": 0 00:09:39.893 }, 00:09:39.893 { 00:09:39.893 "name": "BaseBdev3", 00:09:39.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.893 "is_configured": false, 00:09:39.893 "data_offset": 0, 00:09:39.893 "data_size": 0 00:09:39.893 } 00:09:39.893 ] 00:09:39.893 }' 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.893 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 [2024-11-10 15:18:46.428286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.153 [2024-11-10 15:18:46.428444] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 [2024-11-10 15:18:46.440317] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:40.153 [2024-11-10 15:18:46.440396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:40.153 [2024-11-10 15:18:46.440425] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.153 [2024-11-10 15:18:46.440446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.153 [2024-11-10 15:18:46.440477] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.153 [2024-11-10 15:18:46.440498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 [2024-11-10 15:18:46.467145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.153 BaseBdev1 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.153 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.153 [ 00:09:40.153 { 00:09:40.153 "name": "BaseBdev1", 00:09:40.153 "aliases": [ 00:09:40.153 "90c33796-44d1-4b6c-81aa-ce6b035f8013" 00:09:40.153 ], 00:09:40.153 "product_name": "Malloc disk", 00:09:40.153 "block_size": 512, 00:09:40.153 "num_blocks": 65536, 00:09:40.153 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:40.153 "assigned_rate_limits": { 00:09:40.153 "rw_ios_per_sec": 0, 00:09:40.154 "rw_mbytes_per_sec": 0, 00:09:40.154 "r_mbytes_per_sec": 0, 00:09:40.154 "w_mbytes_per_sec": 0 00:09:40.154 }, 00:09:40.154 "claimed": true, 00:09:40.154 "claim_type": "exclusive_write", 00:09:40.154 "zoned": false, 00:09:40.154 "supported_io_types": { 
00:09:40.154 "read": true, 00:09:40.154 "write": true, 00:09:40.154 "unmap": true, 00:09:40.154 "flush": true, 00:09:40.154 "reset": true, 00:09:40.154 "nvme_admin": false, 00:09:40.154 "nvme_io": false, 00:09:40.154 "nvme_io_md": false, 00:09:40.154 "write_zeroes": true, 00:09:40.154 "zcopy": true, 00:09:40.154 "get_zone_info": false, 00:09:40.154 "zone_management": false, 00:09:40.154 "zone_append": false, 00:09:40.154 "compare": false, 00:09:40.154 "compare_and_write": false, 00:09:40.154 "abort": true, 00:09:40.154 "seek_hole": false, 00:09:40.154 "seek_data": false, 00:09:40.154 "copy": true, 00:09:40.154 "nvme_iov_md": false 00:09:40.154 }, 00:09:40.154 "memory_domains": [ 00:09:40.154 { 00:09:40.154 "dma_device_id": "system", 00:09:40.154 "dma_device_type": 1 00:09:40.154 }, 00:09:40.154 { 00:09:40.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.154 "dma_device_type": 2 00:09:40.154 } 00:09:40.154 ], 00:09:40.154 "driver_specific": {} 00:09:40.154 } 00:09:40.154 ] 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.154 15:18:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.154 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.413 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.413 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.413 "name": "Existed_Raid", 00:09:40.413 "uuid": "3c8ac61f-0b5a-4090-9e87-7d53d1629c1c", 00:09:40.413 "strip_size_kb": 64, 00:09:40.413 "state": "configuring", 00:09:40.413 "raid_level": "concat", 00:09:40.413 "superblock": true, 00:09:40.413 "num_base_bdevs": 3, 00:09:40.413 "num_base_bdevs_discovered": 1, 00:09:40.413 "num_base_bdevs_operational": 3, 00:09:40.413 "base_bdevs_list": [ 00:09:40.413 { 00:09:40.413 "name": "BaseBdev1", 00:09:40.413 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:40.413 "is_configured": true, 00:09:40.413 "data_offset": 2048, 00:09:40.413 "data_size": 63488 00:09:40.413 }, 00:09:40.413 { 00:09:40.413 "name": "BaseBdev2", 00:09:40.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.413 "is_configured": false, 00:09:40.413 "data_offset": 0, 00:09:40.413 "data_size": 0 00:09:40.413 }, 00:09:40.413 { 00:09:40.413 "name": 
"BaseBdev3", 00:09:40.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.413 "is_configured": false, 00:09:40.413 "data_offset": 0, 00:09:40.413 "data_size": 0 00:09:40.413 } 00:09:40.413 ] 00:09:40.413 }' 00:09:40.413 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.413 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.672 [2024-11-10 15:18:46.911372] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.672 [2024-11-10 15:18:46.911560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.672 [2024-11-10 15:18:46.923378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:40.672 [2024-11-10 15:18:46.925709] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:40.672 [2024-11-10 15:18:46.925786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:40.672 [2024-11-10 15:18:46.925819] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:40.672 [2024-11-10 15:18:46.925841] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.672 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.673 "name": "Existed_Raid", 00:09:40.673 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:40.673 "strip_size_kb": 64, 00:09:40.673 "state": "configuring", 00:09:40.673 "raid_level": "concat", 00:09:40.673 "superblock": true, 00:09:40.673 "num_base_bdevs": 3, 00:09:40.673 "num_base_bdevs_discovered": 1, 00:09:40.673 "num_base_bdevs_operational": 3, 00:09:40.673 "base_bdevs_list": [ 00:09:40.673 { 00:09:40.673 "name": "BaseBdev1", 00:09:40.673 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:40.673 "is_configured": true, 00:09:40.673 "data_offset": 2048, 00:09:40.673 "data_size": 63488 00:09:40.673 }, 00:09:40.673 { 00:09:40.673 "name": "BaseBdev2", 00:09:40.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.673 "is_configured": false, 00:09:40.673 "data_offset": 0, 00:09:40.673 "data_size": 0 00:09:40.673 }, 00:09:40.673 { 00:09:40.673 "name": "BaseBdev3", 00:09:40.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.673 "is_configured": false, 00:09:40.673 "data_offset": 0, 00:09:40.673 "data_size": 0 00:09:40.673 } 00:09:40.673 ] 00:09:40.673 }' 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.673 15:18:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 [2024-11-10 15:18:47.364326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:41.242 BaseBdev2 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 [ 00:09:41.242 { 00:09:41.242 "name": "BaseBdev2", 00:09:41.242 "aliases": [ 00:09:41.242 
"de651639-0406-4daf-8672-34d6e02ff5cc" 00:09:41.242 ], 00:09:41.242 "product_name": "Malloc disk", 00:09:41.242 "block_size": 512, 00:09:41.242 "num_blocks": 65536, 00:09:41.242 "uuid": "de651639-0406-4daf-8672-34d6e02ff5cc", 00:09:41.242 "assigned_rate_limits": { 00:09:41.242 "rw_ios_per_sec": 0, 00:09:41.242 "rw_mbytes_per_sec": 0, 00:09:41.242 "r_mbytes_per_sec": 0, 00:09:41.242 "w_mbytes_per_sec": 0 00:09:41.242 }, 00:09:41.242 "claimed": true, 00:09:41.242 "claim_type": "exclusive_write", 00:09:41.242 "zoned": false, 00:09:41.242 "supported_io_types": { 00:09:41.242 "read": true, 00:09:41.242 "write": true, 00:09:41.242 "unmap": true, 00:09:41.242 "flush": true, 00:09:41.242 "reset": true, 00:09:41.242 "nvme_admin": false, 00:09:41.242 "nvme_io": false, 00:09:41.242 "nvme_io_md": false, 00:09:41.242 "write_zeroes": true, 00:09:41.242 "zcopy": true, 00:09:41.242 "get_zone_info": false, 00:09:41.242 "zone_management": false, 00:09:41.242 "zone_append": false, 00:09:41.242 "compare": false, 00:09:41.242 "compare_and_write": false, 00:09:41.242 "abort": true, 00:09:41.242 "seek_hole": false, 00:09:41.242 "seek_data": false, 00:09:41.242 "copy": true, 00:09:41.242 "nvme_iov_md": false 00:09:41.242 }, 00:09:41.242 "memory_domains": [ 00:09:41.242 { 00:09:41.242 "dma_device_id": "system", 00:09:41.242 "dma_device_type": 1 00:09:41.242 }, 00:09:41.242 { 00:09:41.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.242 "dma_device_type": 2 00:09:41.242 } 00:09:41.242 ], 00:09:41.242 "driver_specific": {} 00:09:41.242 } 00:09:41.242 ] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.242 "name": "Existed_Raid", 00:09:41.242 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:41.242 
"strip_size_kb": 64, 00:09:41.242 "state": "configuring", 00:09:41.242 "raid_level": "concat", 00:09:41.242 "superblock": true, 00:09:41.242 "num_base_bdevs": 3, 00:09:41.242 "num_base_bdevs_discovered": 2, 00:09:41.242 "num_base_bdevs_operational": 3, 00:09:41.242 "base_bdevs_list": [ 00:09:41.242 { 00:09:41.242 "name": "BaseBdev1", 00:09:41.242 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:41.242 "is_configured": true, 00:09:41.242 "data_offset": 2048, 00:09:41.242 "data_size": 63488 00:09:41.242 }, 00:09:41.242 { 00:09:41.242 "name": "BaseBdev2", 00:09:41.242 "uuid": "de651639-0406-4daf-8672-34d6e02ff5cc", 00:09:41.242 "is_configured": true, 00:09:41.242 "data_offset": 2048, 00:09:41.242 "data_size": 63488 00:09:41.242 }, 00:09:41.242 { 00:09:41.242 "name": "BaseBdev3", 00:09:41.242 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.242 "is_configured": false, 00:09:41.242 "data_offset": 0, 00:09:41.242 "data_size": 0 00:09:41.242 } 00:09:41.242 ] 00:09:41.242 }' 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.242 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.502 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.502 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.502 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.502 [2024-11-10 15:18:47.859383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.502 [2024-11-10 15:18:47.859653] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:41.502 [2024-11-10 15:18:47.859682] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.502 [2024-11-10 15:18:47.860137] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:41.762 BaseBdev3 00:09:41.762 [2024-11-10 15:18:47.860310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:41.762 [2024-11-10 15:18:47.860329] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:09:41.762 [2024-11-10 15:18:47.860483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:41.762 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.762 [ 00:09:41.762 { 00:09:41.762 "name": "BaseBdev3", 00:09:41.762 "aliases": [ 00:09:41.762 "0b973208-a770-4e6b-aee5-6bedc2b3e90e" 00:09:41.762 ], 00:09:41.762 "product_name": "Malloc disk", 00:09:41.762 "block_size": 512, 00:09:41.762 "num_blocks": 65536, 00:09:41.762 "uuid": "0b973208-a770-4e6b-aee5-6bedc2b3e90e", 00:09:41.762 "assigned_rate_limits": { 00:09:41.762 "rw_ios_per_sec": 0, 00:09:41.762 "rw_mbytes_per_sec": 0, 00:09:41.762 "r_mbytes_per_sec": 0, 00:09:41.762 "w_mbytes_per_sec": 0 00:09:41.762 }, 00:09:41.762 "claimed": true, 00:09:41.762 "claim_type": "exclusive_write", 00:09:41.762 "zoned": false, 00:09:41.762 "supported_io_types": { 00:09:41.762 "read": true, 00:09:41.762 "write": true, 00:09:41.762 "unmap": true, 00:09:41.762 "flush": true, 00:09:41.762 "reset": true, 00:09:41.763 "nvme_admin": false, 00:09:41.763 "nvme_io": false, 00:09:41.763 "nvme_io_md": false, 00:09:41.763 "write_zeroes": true, 00:09:41.763 "zcopy": true, 00:09:41.763 "get_zone_info": false, 00:09:41.763 "zone_management": false, 00:09:41.763 "zone_append": false, 00:09:41.763 "compare": false, 00:09:41.763 "compare_and_write": false, 00:09:41.763 "abort": true, 00:09:41.763 "seek_hole": false, 00:09:41.763 "seek_data": false, 00:09:41.763 "copy": true, 00:09:41.763 "nvme_iov_md": false 00:09:41.763 }, 00:09:41.763 "memory_domains": [ 00:09:41.763 { 00:09:41.763 "dma_device_id": "system", 00:09:41.763 "dma_device_type": 1 00:09:41.763 }, 00:09:41.763 { 00:09:41.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.763 "dma_device_type": 2 00:09:41.763 } 00:09:41.763 ], 00:09:41.763 "driver_specific": {} 00:09:41.763 } 00:09:41.763 ] 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.763 
15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.763 "name": "Existed_Raid", 00:09:41.763 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:41.763 "strip_size_kb": 64, 00:09:41.763 "state": "online", 00:09:41.763 "raid_level": "concat", 00:09:41.763 "superblock": true, 00:09:41.763 "num_base_bdevs": 3, 00:09:41.763 "num_base_bdevs_discovered": 3, 00:09:41.763 "num_base_bdevs_operational": 3, 00:09:41.763 "base_bdevs_list": [ 00:09:41.763 { 00:09:41.763 "name": "BaseBdev1", 00:09:41.763 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:41.763 "is_configured": true, 00:09:41.763 "data_offset": 2048, 00:09:41.763 "data_size": 63488 00:09:41.763 }, 00:09:41.763 { 00:09:41.763 "name": "BaseBdev2", 00:09:41.763 "uuid": "de651639-0406-4daf-8672-34d6e02ff5cc", 00:09:41.763 "is_configured": true, 00:09:41.763 "data_offset": 2048, 00:09:41.763 "data_size": 63488 00:09:41.763 }, 00:09:41.763 { 00:09:41.763 "name": "BaseBdev3", 00:09:41.763 "uuid": "0b973208-a770-4e6b-aee5-6bedc2b3e90e", 00:09:41.763 "is_configured": true, 00:09:41.763 "data_offset": 2048, 00:09:41.763 "data_size": 63488 00:09:41.763 } 00:09:41.763 ] 00:09:41.763 }' 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.763 15:18:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 
00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.023 [2024-11-10 15:18:48.355920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.023 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.283 "name": "Existed_Raid", 00:09:42.283 "aliases": [ 00:09:42.283 "4cbff2a7-60c4-4940-92db-8f4e09570992" 00:09:42.283 ], 00:09:42.283 "product_name": "Raid Volume", 00:09:42.283 "block_size": 512, 00:09:42.283 "num_blocks": 190464, 00:09:42.283 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:42.283 "assigned_rate_limits": { 00:09:42.283 "rw_ios_per_sec": 0, 00:09:42.283 "rw_mbytes_per_sec": 0, 00:09:42.283 "r_mbytes_per_sec": 0, 00:09:42.283 "w_mbytes_per_sec": 0 00:09:42.283 }, 00:09:42.283 "claimed": false, 00:09:42.283 "zoned": false, 00:09:42.283 "supported_io_types": { 00:09:42.283 "read": true, 00:09:42.283 "write": true, 00:09:42.283 "unmap": true, 00:09:42.283 "flush": true, 00:09:42.283 "reset": true, 00:09:42.283 "nvme_admin": false, 00:09:42.283 "nvme_io": false, 00:09:42.283 "nvme_io_md": false, 00:09:42.283 "write_zeroes": true, 00:09:42.283 "zcopy": false, 00:09:42.283 "get_zone_info": false, 00:09:42.283 "zone_management": false, 00:09:42.283 "zone_append": false, 00:09:42.283 "compare": false, 00:09:42.283 "compare_and_write": false, 
00:09:42.283 "abort": false, 00:09:42.283 "seek_hole": false, 00:09:42.283 "seek_data": false, 00:09:42.283 "copy": false, 00:09:42.283 "nvme_iov_md": false 00:09:42.283 }, 00:09:42.283 "memory_domains": [ 00:09:42.283 { 00:09:42.283 "dma_device_id": "system", 00:09:42.283 "dma_device_type": 1 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.283 "dma_device_type": 2 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "dma_device_id": "system", 00:09:42.283 "dma_device_type": 1 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.283 "dma_device_type": 2 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "dma_device_id": "system", 00:09:42.283 "dma_device_type": 1 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.283 "dma_device_type": 2 00:09:42.283 } 00:09:42.283 ], 00:09:42.283 "driver_specific": { 00:09:42.283 "raid": { 00:09:42.283 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:42.283 "strip_size_kb": 64, 00:09:42.283 "state": "online", 00:09:42.283 "raid_level": "concat", 00:09:42.283 "superblock": true, 00:09:42.283 "num_base_bdevs": 3, 00:09:42.283 "num_base_bdevs_discovered": 3, 00:09:42.283 "num_base_bdevs_operational": 3, 00:09:42.283 "base_bdevs_list": [ 00:09:42.283 { 00:09:42.283 "name": "BaseBdev1", 00:09:42.283 "uuid": "90c33796-44d1-4b6c-81aa-ce6b035f8013", 00:09:42.283 "is_configured": true, 00:09:42.283 "data_offset": 2048, 00:09:42.283 "data_size": 63488 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "name": "BaseBdev2", 00:09:42.283 "uuid": "de651639-0406-4daf-8672-34d6e02ff5cc", 00:09:42.283 "is_configured": true, 00:09:42.283 "data_offset": 2048, 00:09:42.283 "data_size": 63488 00:09:42.283 }, 00:09:42.283 { 00:09:42.283 "name": "BaseBdev3", 00:09:42.283 "uuid": "0b973208-a770-4e6b-aee5-6bedc2b3e90e", 00:09:42.283 "is_configured": true, 00:09:42.283 "data_offset": 2048, 00:09:42.283 "data_size": 63488 00:09:42.283 } 
00:09:42.283 ] 00:09:42.283 } 00:09:42.283 } 00:09:42.283 }' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:42.283 BaseBdev2 00:09:42.283 BaseBdev3' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.283 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.284 [2024-11-10 15:18:48.603729] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:42.284 [2024-11-10 15:18:48.603862] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.284 [2024-11-10 15:18:48.603942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.284 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.544 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.544 15:18:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.544 "name": "Existed_Raid", 00:09:42.544 "uuid": "4cbff2a7-60c4-4940-92db-8f4e09570992", 00:09:42.544 "strip_size_kb": 64, 00:09:42.544 "state": "offline", 00:09:42.544 "raid_level": "concat", 00:09:42.544 "superblock": true, 00:09:42.544 "num_base_bdevs": 3, 00:09:42.544 "num_base_bdevs_discovered": 2, 00:09:42.544 "num_base_bdevs_operational": 2, 00:09:42.544 "base_bdevs_list": [ 00:09:42.544 { 00:09:42.544 "name": null, 00:09:42.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.544 "is_configured": false, 00:09:42.544 "data_offset": 0, 00:09:42.544 "data_size": 63488 00:09:42.544 }, 00:09:42.544 { 00:09:42.544 "name": "BaseBdev2", 00:09:42.544 "uuid": "de651639-0406-4daf-8672-34d6e02ff5cc", 00:09:42.544 "is_configured": true, 00:09:42.544 "data_offset": 2048, 00:09:42.544 "data_size": 63488 00:09:42.544 }, 00:09:42.544 { 00:09:42.544 "name": "BaseBdev3", 00:09:42.544 "uuid": "0b973208-a770-4e6b-aee5-6bedc2b3e90e", 00:09:42.544 "is_configured": true, 00:09:42.544 "data_offset": 2048, 00:09:42.544 "data_size": 63488 00:09:42.544 } 00:09:42.544 ] 00:09:42.544 }' 00:09:42.544 15:18:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.544 15:18:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.803 [2024-11-10 15:18:49.104747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:42.803 
15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.803 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.063 [2024-11-10 15:18:49.181022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:43.063 [2024-11-10 15:18:49.181094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.063 15:18:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.063 BaseBdev2 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
bdev_timeout=2000 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.063 [ 00:09:43.063 { 00:09:43.063 "name": "BaseBdev2", 00:09:43.063 "aliases": [ 00:09:43.063 "fab11348-d8e1-4a1c-83d7-a99462073dc1" 00:09:43.063 ], 00:09:43.063 "product_name": "Malloc disk", 00:09:43.063 "block_size": 512, 00:09:43.063 "num_blocks": 65536, 00:09:43.063 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:43.063 "assigned_rate_limits": { 00:09:43.063 "rw_ios_per_sec": 0, 00:09:43.063 "rw_mbytes_per_sec": 0, 00:09:43.063 "r_mbytes_per_sec": 0, 00:09:43.063 "w_mbytes_per_sec": 0 00:09:43.063 }, 00:09:43.063 "claimed": false, 00:09:43.063 "zoned": false, 00:09:43.063 "supported_io_types": { 00:09:43.063 "read": true, 00:09:43.063 "write": true, 00:09:43.063 "unmap": true, 00:09:43.063 "flush": true, 00:09:43.063 "reset": true, 00:09:43.063 "nvme_admin": false, 00:09:43.063 "nvme_io": false, 00:09:43.063 "nvme_io_md": false, 00:09:43.063 "write_zeroes": true, 00:09:43.063 "zcopy": true, 00:09:43.063 "get_zone_info": false, 00:09:43.063 "zone_management": false, 00:09:43.063 "zone_append": false, 00:09:43.063 "compare": false, 00:09:43.063 "compare_and_write": false, 00:09:43.063 "abort": true, 00:09:43.063 "seek_hole": 
false, 00:09:43.063 "seek_data": false, 00:09:43.063 "copy": true, 00:09:43.063 "nvme_iov_md": false 00:09:43.063 }, 00:09:43.063 "memory_domains": [ 00:09:43.063 { 00:09:43.063 "dma_device_id": "system", 00:09:43.063 "dma_device_type": 1 00:09:43.063 }, 00:09:43.063 { 00:09:43.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.063 "dma_device_type": 2 00:09:43.063 } 00:09:43.063 ], 00:09:43.063 "driver_specific": {} 00:09:43.063 } 00:09:43.063 ] 00:09:43.063 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.064 BaseBdev3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:43.064 15:18:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.064 [ 00:09:43.064 { 00:09:43.064 "name": "BaseBdev3", 00:09:43.064 "aliases": [ 00:09:43.064 "e6cac344-2fd8-42d5-b1ec-45b2339716f8" 00:09:43.064 ], 00:09:43.064 "product_name": "Malloc disk", 00:09:43.064 "block_size": 512, 00:09:43.064 "num_blocks": 65536, 00:09:43.064 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:43.064 "assigned_rate_limits": { 00:09:43.064 "rw_ios_per_sec": 0, 00:09:43.064 "rw_mbytes_per_sec": 0, 00:09:43.064 "r_mbytes_per_sec": 0, 00:09:43.064 "w_mbytes_per_sec": 0 00:09:43.064 }, 00:09:43.064 "claimed": false, 00:09:43.064 "zoned": false, 00:09:43.064 "supported_io_types": { 00:09:43.064 "read": true, 00:09:43.064 "write": true, 00:09:43.064 "unmap": true, 00:09:43.064 "flush": true, 00:09:43.064 "reset": true, 00:09:43.064 "nvme_admin": false, 00:09:43.064 "nvme_io": false, 00:09:43.064 "nvme_io_md": false, 00:09:43.064 "write_zeroes": true, 00:09:43.064 "zcopy": true, 00:09:43.064 "get_zone_info": false, 00:09:43.064 "zone_management": false, 00:09:43.064 "zone_append": false, 00:09:43.064 "compare": false, 00:09:43.064 
"compare_and_write": false, 00:09:43.064 "abort": true, 00:09:43.064 "seek_hole": false, 00:09:43.064 "seek_data": false, 00:09:43.064 "copy": true, 00:09:43.064 "nvme_iov_md": false 00:09:43.064 }, 00:09:43.064 "memory_domains": [ 00:09:43.064 { 00:09:43.064 "dma_device_id": "system", 00:09:43.064 "dma_device_type": 1 00:09:43.064 }, 00:09:43.064 { 00:09:43.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.064 "dma_device_type": 2 00:09:43.064 } 00:09:43.064 ], 00:09:43.064 "driver_specific": {} 00:09:43.064 } 00:09:43.064 ] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.064 [2024-11-10 15:18:49.381707] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:43.064 [2024-11-10 15:18:49.381843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:43.064 [2024-11-10 15:18:49.381883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.064 [2024-11-10 15:18:49.384093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.064 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.324 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.324 "name": "Existed_Raid", 00:09:43.324 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:43.324 
"strip_size_kb": 64, 00:09:43.324 "state": "configuring", 00:09:43.324 "raid_level": "concat", 00:09:43.324 "superblock": true, 00:09:43.324 "num_base_bdevs": 3, 00:09:43.324 "num_base_bdevs_discovered": 2, 00:09:43.324 "num_base_bdevs_operational": 3, 00:09:43.324 "base_bdevs_list": [ 00:09:43.324 { 00:09:43.324 "name": "BaseBdev1", 00:09:43.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.324 "is_configured": false, 00:09:43.324 "data_offset": 0, 00:09:43.324 "data_size": 0 00:09:43.324 }, 00:09:43.324 { 00:09:43.324 "name": "BaseBdev2", 00:09:43.324 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:43.324 "is_configured": true, 00:09:43.324 "data_offset": 2048, 00:09:43.324 "data_size": 63488 00:09:43.324 }, 00:09:43.324 { 00:09:43.324 "name": "BaseBdev3", 00:09:43.324 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:43.324 "is_configured": true, 00:09:43.324 "data_offset": 2048, 00:09:43.324 "data_size": 63488 00:09:43.324 } 00:09:43.324 ] 00:09:43.324 }' 00:09:43.324 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.324 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.583 [2024-11-10 15:18:49.837846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.583 "name": "Existed_Raid", 00:09:43.583 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:43.583 "strip_size_kb": 64, 00:09:43.583 "state": "configuring", 00:09:43.583 "raid_level": "concat", 00:09:43.583 "superblock": true, 00:09:43.583 "num_base_bdevs": 3, 00:09:43.583 "num_base_bdevs_discovered": 1, 
00:09:43.583 "num_base_bdevs_operational": 3, 00:09:43.583 "base_bdevs_list": [ 00:09:43.583 { 00:09:43.583 "name": "BaseBdev1", 00:09:43.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.583 "is_configured": false, 00:09:43.583 "data_offset": 0, 00:09:43.583 "data_size": 0 00:09:43.583 }, 00:09:43.583 { 00:09:43.583 "name": null, 00:09:43.583 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:43.583 "is_configured": false, 00:09:43.583 "data_offset": 0, 00:09:43.583 "data_size": 63488 00:09:43.583 }, 00:09:43.583 { 00:09:43.583 "name": "BaseBdev3", 00:09:43.583 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:43.583 "is_configured": true, 00:09:43.583 "data_offset": 2048, 00:09:43.583 "data_size": 63488 00:09:43.583 } 00:09:43.583 ] 00:09:43.583 }' 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.583 15:18:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:44.152 [2024-11-10 15:18:50.286808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.152 BaseBdev1 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:44.152 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.153 [ 00:09:44.153 { 00:09:44.153 "name": "BaseBdev1", 00:09:44.153 "aliases": [ 00:09:44.153 "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9" 00:09:44.153 ], 00:09:44.153 "product_name": "Malloc 
disk", 00:09:44.153 "block_size": 512, 00:09:44.153 "num_blocks": 65536, 00:09:44.153 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:44.153 "assigned_rate_limits": { 00:09:44.153 "rw_ios_per_sec": 0, 00:09:44.153 "rw_mbytes_per_sec": 0, 00:09:44.153 "r_mbytes_per_sec": 0, 00:09:44.153 "w_mbytes_per_sec": 0 00:09:44.153 }, 00:09:44.153 "claimed": true, 00:09:44.153 "claim_type": "exclusive_write", 00:09:44.153 "zoned": false, 00:09:44.153 "supported_io_types": { 00:09:44.153 "read": true, 00:09:44.153 "write": true, 00:09:44.153 "unmap": true, 00:09:44.153 "flush": true, 00:09:44.153 "reset": true, 00:09:44.153 "nvme_admin": false, 00:09:44.153 "nvme_io": false, 00:09:44.153 "nvme_io_md": false, 00:09:44.153 "write_zeroes": true, 00:09:44.153 "zcopy": true, 00:09:44.153 "get_zone_info": false, 00:09:44.153 "zone_management": false, 00:09:44.153 "zone_append": false, 00:09:44.153 "compare": false, 00:09:44.153 "compare_and_write": false, 00:09:44.153 "abort": true, 00:09:44.153 "seek_hole": false, 00:09:44.153 "seek_data": false, 00:09:44.153 "copy": true, 00:09:44.153 "nvme_iov_md": false 00:09:44.153 }, 00:09:44.153 "memory_domains": [ 00:09:44.153 { 00:09:44.153 "dma_device_id": "system", 00:09:44.153 "dma_device_type": 1 00:09:44.153 }, 00:09:44.153 { 00:09:44.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:44.153 "dma_device_type": 2 00:09:44.153 } 00:09:44.153 ], 00:09:44.153 "driver_specific": {} 00:09:44.153 } 00:09:44.153 ] 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.153 15:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.153 "name": "Existed_Raid", 00:09:44.153 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:44.153 "strip_size_kb": 64, 00:09:44.153 "state": "configuring", 00:09:44.153 "raid_level": "concat", 00:09:44.153 "superblock": true, 00:09:44.153 "num_base_bdevs": 3, 00:09:44.153 "num_base_bdevs_discovered": 2, 00:09:44.153 "num_base_bdevs_operational": 3, 00:09:44.153 "base_bdevs_list": [ 00:09:44.153 
{ 00:09:44.153 "name": "BaseBdev1", 00:09:44.153 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:44.153 "is_configured": true, 00:09:44.153 "data_offset": 2048, 00:09:44.153 "data_size": 63488 00:09:44.153 }, 00:09:44.153 { 00:09:44.153 "name": null, 00:09:44.153 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:44.153 "is_configured": false, 00:09:44.153 "data_offset": 0, 00:09:44.153 "data_size": 63488 00:09:44.153 }, 00:09:44.153 { 00:09:44.153 "name": "BaseBdev3", 00:09:44.153 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:44.153 "is_configured": true, 00:09:44.153 "data_offset": 2048, 00:09:44.153 "data_size": 63488 00:09:44.153 } 00:09:44.153 ] 00:09:44.153 }' 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.153 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.723 [2024-11-10 15:18:50.839045] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.723 15:18:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.723 "name": "Existed_Raid", 00:09:44.723 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:44.723 "strip_size_kb": 64, 00:09:44.723 "state": "configuring", 00:09:44.723 "raid_level": "concat", 00:09:44.723 "superblock": true, 00:09:44.723 "num_base_bdevs": 3, 00:09:44.723 "num_base_bdevs_discovered": 1, 00:09:44.723 "num_base_bdevs_operational": 3, 00:09:44.723 "base_bdevs_list": [ 00:09:44.723 { 00:09:44.723 "name": "BaseBdev1", 00:09:44.723 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:44.723 "is_configured": true, 00:09:44.723 "data_offset": 2048, 00:09:44.723 "data_size": 63488 00:09:44.723 }, 00:09:44.723 { 00:09:44.723 "name": null, 00:09:44.723 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:44.723 "is_configured": false, 00:09:44.723 "data_offset": 0, 00:09:44.723 "data_size": 63488 00:09:44.723 }, 00:09:44.723 { 00:09:44.723 "name": null, 00:09:44.723 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:44.723 "is_configured": false, 00:09:44.723 "data_offset": 0, 00:09:44.723 "data_size": 63488 00:09:44.723 } 00:09:44.723 ] 00:09:44.723 }' 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.723 15:18:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.983 15:18:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.983 [2024-11-10 15:18:51.335231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.983 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.244 "name": "Existed_Raid", 00:09:45.244 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:45.244 "strip_size_kb": 64, 00:09:45.244 "state": "configuring", 00:09:45.244 "raid_level": "concat", 00:09:45.244 "superblock": true, 00:09:45.244 "num_base_bdevs": 3, 00:09:45.244 "num_base_bdevs_discovered": 2, 00:09:45.244 "num_base_bdevs_operational": 3, 00:09:45.244 "base_bdevs_list": [ 00:09:45.244 { 00:09:45.244 "name": "BaseBdev1", 00:09:45.244 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:45.244 "is_configured": true, 00:09:45.244 "data_offset": 2048, 00:09:45.244 "data_size": 63488 00:09:45.244 }, 00:09:45.244 { 00:09:45.244 "name": null, 00:09:45.244 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:45.244 "is_configured": false, 00:09:45.244 "data_offset": 0, 00:09:45.244 "data_size": 63488 00:09:45.244 }, 00:09:45.244 { 00:09:45.244 "name": "BaseBdev3", 00:09:45.244 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:45.244 "is_configured": true, 00:09:45.244 "data_offset": 2048, 00:09:45.244 "data_size": 63488 00:09:45.244 } 00:09:45.244 ] 00:09:45.244 }' 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.244 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.504 [2024-11-10 15:18:51.751351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.504 "name": "Existed_Raid", 00:09:45.504 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:45.504 "strip_size_kb": 64, 00:09:45.504 "state": "configuring", 00:09:45.504 "raid_level": "concat", 00:09:45.504 "superblock": true, 00:09:45.504 "num_base_bdevs": 3, 00:09:45.504 "num_base_bdevs_discovered": 1, 00:09:45.504 "num_base_bdevs_operational": 3, 00:09:45.504 "base_bdevs_list": [ 00:09:45.504 { 00:09:45.504 "name": null, 00:09:45.504 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:45.504 "is_configured": false, 00:09:45.504 "data_offset": 0, 00:09:45.504 "data_size": 63488 00:09:45.504 }, 00:09:45.504 { 00:09:45.504 "name": null, 00:09:45.504 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:45.504 "is_configured": false, 00:09:45.504 "data_offset": 0, 00:09:45.504 "data_size": 63488 00:09:45.504 }, 00:09:45.504 { 00:09:45.504 "name": "BaseBdev3", 00:09:45.504 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 
00:09:45.504 "is_configured": true, 00:09:45.504 "data_offset": 2048, 00:09:45.504 "data_size": 63488 00:09:45.504 } 00:09:45.504 ] 00:09:45.504 }' 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.504 15:18:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.074 [2024-11-10 15:18:52.294847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.074 "name": "Existed_Raid", 00:09:46.074 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:46.074 "strip_size_kb": 64, 00:09:46.074 "state": "configuring", 00:09:46.074 "raid_level": "concat", 00:09:46.074 "superblock": true, 00:09:46.074 "num_base_bdevs": 3, 00:09:46.074 "num_base_bdevs_discovered": 2, 00:09:46.074 "num_base_bdevs_operational": 3, 00:09:46.074 "base_bdevs_list": [ 00:09:46.074 { 00:09:46.074 "name": null, 00:09:46.074 
"uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:46.074 "is_configured": false, 00:09:46.074 "data_offset": 0, 00:09:46.074 "data_size": 63488 00:09:46.074 }, 00:09:46.074 { 00:09:46.074 "name": "BaseBdev2", 00:09:46.074 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:46.074 "is_configured": true, 00:09:46.074 "data_offset": 2048, 00:09:46.074 "data_size": 63488 00:09:46.074 }, 00:09:46.074 { 00:09:46.074 "name": "BaseBdev3", 00:09:46.074 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:46.074 "is_configured": true, 00:09:46.074 "data_offset": 2048, 00:09:46.074 "data_size": 63488 00:09:46.074 } 00:09:46.074 ] 00:09:46.074 }' 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.074 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4aff910a-ab4f-429c-8b8e-bb8e3deef2d9 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.644 [2024-11-10 15:18:52.847710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:46.644 [2024-11-10 15:18:52.847907] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.644 [2024-11-10 15:18:52.847920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:46.644 [2024-11-10 15:18:52.848221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:09:46.644 NewBaseBdev 00:09:46.644 [2024-11-10 15:18:52.848351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.644 [2024-11-10 15:18:52.848367] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.644 [2024-11-10 15:18:52.848491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@903 -- # local i 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:09:46.644 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.645 [ 00:09:46.645 { 00:09:46.645 "name": "NewBaseBdev", 00:09:46.645 "aliases": [ 00:09:46.645 "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9" 00:09:46.645 ], 00:09:46.645 "product_name": "Malloc disk", 00:09:46.645 "block_size": 512, 00:09:46.645 "num_blocks": 65536, 00:09:46.645 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:46.645 "assigned_rate_limits": { 00:09:46.645 "rw_ios_per_sec": 0, 00:09:46.645 "rw_mbytes_per_sec": 0, 00:09:46.645 "r_mbytes_per_sec": 0, 00:09:46.645 "w_mbytes_per_sec": 0 00:09:46.645 }, 00:09:46.645 "claimed": true, 00:09:46.645 "claim_type": "exclusive_write", 00:09:46.645 "zoned": false, 00:09:46.645 "supported_io_types": { 00:09:46.645 "read": true, 00:09:46.645 "write": true, 00:09:46.645 "unmap": true, 00:09:46.645 "flush": true, 00:09:46.645 "reset": true, 00:09:46.645 "nvme_admin": false, 00:09:46.645 "nvme_io": false, 00:09:46.645 "nvme_io_md": false, 
00:09:46.645 "write_zeroes": true, 00:09:46.645 "zcopy": true, 00:09:46.645 "get_zone_info": false, 00:09:46.645 "zone_management": false, 00:09:46.645 "zone_append": false, 00:09:46.645 "compare": false, 00:09:46.645 "compare_and_write": false, 00:09:46.645 "abort": true, 00:09:46.645 "seek_hole": false, 00:09:46.645 "seek_data": false, 00:09:46.645 "copy": true, 00:09:46.645 "nvme_iov_md": false 00:09:46.645 }, 00:09:46.645 "memory_domains": [ 00:09:46.645 { 00:09:46.645 "dma_device_id": "system", 00:09:46.645 "dma_device_type": 1 00:09:46.645 }, 00:09:46.645 { 00:09:46.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.645 "dma_device_type": 2 00:09:46.645 } 00:09:46.645 ], 00:09:46.645 "driver_specific": {} 00:09:46.645 } 00:09:46.645 ] 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.645 "name": "Existed_Raid", 00:09:46.645 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:46.645 "strip_size_kb": 64, 00:09:46.645 "state": "online", 00:09:46.645 "raid_level": "concat", 00:09:46.645 "superblock": true, 00:09:46.645 "num_base_bdevs": 3, 00:09:46.645 "num_base_bdevs_discovered": 3, 00:09:46.645 "num_base_bdevs_operational": 3, 00:09:46.645 "base_bdevs_list": [ 00:09:46.645 { 00:09:46.645 "name": "NewBaseBdev", 00:09:46.645 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:46.645 "is_configured": true, 00:09:46.645 "data_offset": 2048, 00:09:46.645 "data_size": 63488 00:09:46.645 }, 00:09:46.645 { 00:09:46.645 "name": "BaseBdev2", 00:09:46.645 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:46.645 "is_configured": true, 00:09:46.645 "data_offset": 2048, 00:09:46.645 "data_size": 63488 00:09:46.645 }, 00:09:46.645 { 00:09:46.645 "name": "BaseBdev3", 00:09:46.645 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:46.645 "is_configured": true, 00:09:46.645 "data_offset": 2048, 00:09:46.645 "data_size": 63488 00:09:46.645 } 00:09:46.645 ] 00:09:46.645 }' 00:09:46.645 15:18:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.645 15:18:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.213 [2024-11-10 15:18:53.324302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:47.213 "name": "Existed_Raid", 00:09:47.213 "aliases": [ 00:09:47.213 "7acce34b-984f-4e64-ad50-e6597af6043e" 00:09:47.213 ], 00:09:47.213 "product_name": "Raid Volume", 00:09:47.213 "block_size": 512, 00:09:47.213 "num_blocks": 190464, 00:09:47.213 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:47.213 
"assigned_rate_limits": { 00:09:47.213 "rw_ios_per_sec": 0, 00:09:47.213 "rw_mbytes_per_sec": 0, 00:09:47.213 "r_mbytes_per_sec": 0, 00:09:47.213 "w_mbytes_per_sec": 0 00:09:47.213 }, 00:09:47.213 "claimed": false, 00:09:47.213 "zoned": false, 00:09:47.213 "supported_io_types": { 00:09:47.213 "read": true, 00:09:47.213 "write": true, 00:09:47.213 "unmap": true, 00:09:47.213 "flush": true, 00:09:47.213 "reset": true, 00:09:47.213 "nvme_admin": false, 00:09:47.213 "nvme_io": false, 00:09:47.213 "nvme_io_md": false, 00:09:47.213 "write_zeroes": true, 00:09:47.213 "zcopy": false, 00:09:47.213 "get_zone_info": false, 00:09:47.213 "zone_management": false, 00:09:47.213 "zone_append": false, 00:09:47.213 "compare": false, 00:09:47.213 "compare_and_write": false, 00:09:47.213 "abort": false, 00:09:47.213 "seek_hole": false, 00:09:47.213 "seek_data": false, 00:09:47.213 "copy": false, 00:09:47.213 "nvme_iov_md": false 00:09:47.213 }, 00:09:47.213 "memory_domains": [ 00:09:47.213 { 00:09:47.213 "dma_device_id": "system", 00:09:47.213 "dma_device_type": 1 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.213 "dma_device_type": 2 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "dma_device_id": "system", 00:09:47.213 "dma_device_type": 1 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.213 "dma_device_type": 2 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "dma_device_id": "system", 00:09:47.213 "dma_device_type": 1 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.213 "dma_device_type": 2 00:09:47.213 } 00:09:47.213 ], 00:09:47.213 "driver_specific": { 00:09:47.213 "raid": { 00:09:47.213 "uuid": "7acce34b-984f-4e64-ad50-e6597af6043e", 00:09:47.213 "strip_size_kb": 64, 00:09:47.213 "state": "online", 00:09:47.213 "raid_level": "concat", 00:09:47.213 "superblock": true, 00:09:47.213 "num_base_bdevs": 3, 00:09:47.213 "num_base_bdevs_discovered": 3, 
00:09:47.213 "num_base_bdevs_operational": 3, 00:09:47.213 "base_bdevs_list": [ 00:09:47.213 { 00:09:47.213 "name": "NewBaseBdev", 00:09:47.213 "uuid": "4aff910a-ab4f-429c-8b8e-bb8e3deef2d9", 00:09:47.213 "is_configured": true, 00:09:47.213 "data_offset": 2048, 00:09:47.213 "data_size": 63488 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "name": "BaseBdev2", 00:09:47.213 "uuid": "fab11348-d8e1-4a1c-83d7-a99462073dc1", 00:09:47.213 "is_configured": true, 00:09:47.213 "data_offset": 2048, 00:09:47.213 "data_size": 63488 00:09:47.213 }, 00:09:47.213 { 00:09:47.213 "name": "BaseBdev3", 00:09:47.213 "uuid": "e6cac344-2fd8-42d5-b1ec-45b2339716f8", 00:09:47.213 "is_configured": true, 00:09:47.213 "data_offset": 2048, 00:09:47.213 "data_size": 63488 00:09:47.213 } 00:09:47.213 ] 00:09:47.213 } 00:09:47.213 } 00:09:47.213 }' 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.213 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:47.213 BaseBdev2 00:09:47.213 BaseBdev3' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.214 15:18:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:47.214 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.473 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.474 [2024-11-10 15:18:53.583974] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.474 [2024-11-10 15:18:53.584040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.474 [2024-11-10 15:18:53.584136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.474 [2024-11-10 15:18:53.584199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.474 [2024-11-10 15:18:53.584211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78734 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 78734 ']' 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 78734 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:09:47.474 15:18:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78734 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78734' 00:09:47.474 killing process with pid 78734 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 78734 00:09:47.474 [2024-11-10 15:18:53.634696] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.474 15:18:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 78734 00:09:47.474 [2024-11-10 15:18:53.695325] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.740 15:18:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:47.740 00:09:47.740 real 0m8.930s 00:09:47.740 user 0m14.962s 00:09:47.740 sys 0m1.869s 00:09:47.740 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.740 15:18:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:47.740 ************************************ 00:09:47.740 END TEST raid_state_function_test_sb 00:09:47.740 ************************************ 00:09:47.740 15:18:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:47.740 15:18:54 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:47.740 15:18:54 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.740 15:18:54 bdev_raid -- common/autotest_common.sh@10 
-- # set +x 00:09:48.014 ************************************ 00:09:48.014 START TEST raid_superblock_test 00:09:48.014 ************************************ 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 3 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:48.014 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:48.015 15:18:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79332 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79332 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 79332 ']' 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:48.015 15:18:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.015 [2024-11-10 15:18:54.191808] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:48.015 [2024-11-10 15:18:54.191925] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79332 ] 00:09:48.015 [2024-11-10 15:18:54.323562] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:48.015 [2024-11-10 15:18:54.344153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.274 [2024-11-10 15:18:54.387156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.274 [2024-11-10 15:18:54.463411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.274 [2024-11-10 15:18:54.463456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.843 malloc1 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.843 [2024-11-10 15:18:55.054361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:48.843 [2024-11-10 15:18:55.054520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.843 [2024-11-10 15:18:55.054566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:48.843 [2024-11-10 15:18:55.054599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.843 [2024-11-10 15:18:55.057058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.843 [2024-11-10 15:18:55.057125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:48.843 pt1 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:48.843 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 malloc2 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 [2024-11-10 15:18:55.092860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:48.844 [2024-11-10 15:18:55.092917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.844 [2024-11-10 15:18:55.092938] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:48.844 [2024-11-10 15:18:55.092947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.844 [2024-11-10 15:18:55.095393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.844 [2024-11-10 15:18:55.095483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:48.844 pt2 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 malloc3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 [2024-11-10 15:18:55.127441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:48.844 [2024-11-10 15:18:55.127552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.844 [2024-11-10 15:18:55.127590] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:48.844 [2024-11-10 15:18:55.127618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:09:48.844 [2024-11-10 15:18:55.129947] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.844 [2024-11-10 15:18:55.130022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:48.844 pt3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 [2024-11-10 15:18:55.139485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:48.844 [2024-11-10 15:18:55.141614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:48.844 [2024-11-10 15:18:55.141715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:48.844 [2024-11-10 15:18:55.141872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:09:48.844 [2024-11-10 15:18:55.141923] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.844 [2024-11-10 15:18:55.142238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:48.844 [2024-11-10 15:18:55.142415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:09:48.844 [2024-11-10 15:18:55.142455] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:09:48.844 [2024-11-10 
15:18:55.142608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.844 "name": "raid_bdev1", 00:09:48.844 
"uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:48.844 "strip_size_kb": 64, 00:09:48.844 "state": "online", 00:09:48.844 "raid_level": "concat", 00:09:48.844 "superblock": true, 00:09:48.844 "num_base_bdevs": 3, 00:09:48.844 "num_base_bdevs_discovered": 3, 00:09:48.844 "num_base_bdevs_operational": 3, 00:09:48.844 "base_bdevs_list": [ 00:09:48.844 { 00:09:48.844 "name": "pt1", 00:09:48.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.844 "is_configured": true, 00:09:48.844 "data_offset": 2048, 00:09:48.844 "data_size": 63488 00:09:48.844 }, 00:09:48.844 { 00:09:48.844 "name": "pt2", 00:09:48.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.844 "is_configured": true, 00:09:48.844 "data_offset": 2048, 00:09:48.844 "data_size": 63488 00:09:48.844 }, 00:09:48.844 { 00:09:48.844 "name": "pt3", 00:09:48.844 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.844 "is_configured": true, 00:09:48.844 "data_offset": 2048, 00:09:48.844 "data_size": 63488 00:09:48.844 } 00:09:48.844 ] 00:09:48.844 }' 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.844 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.413 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:49.413 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:49.413 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.413 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.413 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.414 [2024-11-10 15:18:55.643969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.414 "name": "raid_bdev1", 00:09:49.414 "aliases": [ 00:09:49.414 "35c7c228-6ed9-4012-af2f-93298737cc31" 00:09:49.414 ], 00:09:49.414 "product_name": "Raid Volume", 00:09:49.414 "block_size": 512, 00:09:49.414 "num_blocks": 190464, 00:09:49.414 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:49.414 "assigned_rate_limits": { 00:09:49.414 "rw_ios_per_sec": 0, 00:09:49.414 "rw_mbytes_per_sec": 0, 00:09:49.414 "r_mbytes_per_sec": 0, 00:09:49.414 "w_mbytes_per_sec": 0 00:09:49.414 }, 00:09:49.414 "claimed": false, 00:09:49.414 "zoned": false, 00:09:49.414 "supported_io_types": { 00:09:49.414 "read": true, 00:09:49.414 "write": true, 00:09:49.414 "unmap": true, 00:09:49.414 "flush": true, 00:09:49.414 "reset": true, 00:09:49.414 "nvme_admin": false, 00:09:49.414 "nvme_io": false, 00:09:49.414 "nvme_io_md": false, 00:09:49.414 "write_zeroes": true, 00:09:49.414 "zcopy": false, 00:09:49.414 "get_zone_info": false, 00:09:49.414 "zone_management": false, 00:09:49.414 "zone_append": false, 00:09:49.414 "compare": false, 00:09:49.414 "compare_and_write": false, 00:09:49.414 "abort": false, 00:09:49.414 "seek_hole": false, 00:09:49.414 "seek_data": false, 00:09:49.414 "copy": false, 00:09:49.414 "nvme_iov_md": false 00:09:49.414 }, 00:09:49.414 "memory_domains": [ 00:09:49.414 { 00:09:49.414 "dma_device_id": "system", 00:09:49.414 
"dma_device_type": 1 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.414 "dma_device_type": 2 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "dma_device_id": "system", 00:09:49.414 "dma_device_type": 1 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.414 "dma_device_type": 2 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "dma_device_id": "system", 00:09:49.414 "dma_device_type": 1 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.414 "dma_device_type": 2 00:09:49.414 } 00:09:49.414 ], 00:09:49.414 "driver_specific": { 00:09:49.414 "raid": { 00:09:49.414 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:49.414 "strip_size_kb": 64, 00:09:49.414 "state": "online", 00:09:49.414 "raid_level": "concat", 00:09:49.414 "superblock": true, 00:09:49.414 "num_base_bdevs": 3, 00:09:49.414 "num_base_bdevs_discovered": 3, 00:09:49.414 "num_base_bdevs_operational": 3, 00:09:49.414 "base_bdevs_list": [ 00:09:49.414 { 00:09:49.414 "name": "pt1", 00:09:49.414 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.414 "is_configured": true, 00:09:49.414 "data_offset": 2048, 00:09:49.414 "data_size": 63488 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "name": "pt2", 00:09:49.414 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.414 "is_configured": true, 00:09:49.414 "data_offset": 2048, 00:09:49.414 "data_size": 63488 00:09:49.414 }, 00:09:49.414 { 00:09:49.414 "name": "pt3", 00:09:49.414 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.414 "is_configured": true, 00:09:49.414 "data_offset": 2048, 00:09:49.414 "data_size": 63488 00:09:49.414 } 00:09:49.414 ] 00:09:49.414 } 00:09:49.414 } 00:09:49.414 }' 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:49.414 pt2 00:09:49.414 pt3' 00:09:49.414 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.674 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.675 [2024-11-10 15:18:55.927934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=35c7c228-6ed9-4012-af2f-93298737cc31 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 35c7c228-6ed9-4012-af2f-93298737cc31 ']' 00:09:49.675 15:18:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.675 [2024-11-10 15:18:55.971663] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.675 [2024-11-10 15:18:55.971774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.675 [2024-11-10 15:18:55.971881] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.675 [2024-11-10 15:18:55.971965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.675 [2024-11-10 15:18:55.971977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.675 15:18:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.675 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.935 15:18:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.935 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.935 [2024-11-10 15:18:56.115758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:49.935 [2024-11-10 15:18:56.118058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:49.935 [2024-11-10 15:18:56.118113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:49.935 [2024-11-10 15:18:56.118163] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:49.935 [2024-11-10 15:18:56.118218] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:49.936 [2024-11-10 15:18:56.118235] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:49.936 [2024-11-10 15:18:56.118250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.936 [2024-11-10 15:18:56.118266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:09:49.936 request: 00:09:49.936 { 00:09:49.936 "name": "raid_bdev1", 00:09:49.936 "raid_level": "concat", 00:09:49.936 "base_bdevs": [ 00:09:49.936 "malloc1", 00:09:49.936 "malloc2", 00:09:49.936 "malloc3" 00:09:49.936 ], 00:09:49.936 "strip_size_kb": 64, 00:09:49.936 "superblock": false, 00:09:49.936 "method": "bdev_raid_create", 00:09:49.936 "req_id": 1 00:09:49.936 } 00:09:49.936 Got JSON-RPC error response 00:09:49.936 response: 00:09:49.936 { 00:09:49.936 "code": -17, 00:09:49.936 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:49.936 } 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 15:18:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 [2024-11-10 15:18:56.183718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:49.936 [2024-11-10 15:18:56.183823] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.936 [2024-11-10 15:18:56.183863] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:49.936 [2024-11-10 15:18:56.183893] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.936 [2024-11-10 15:18:56.186438] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.936 [2024-11-10 15:18:56.186512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:49.936 [2024-11-10 15:18:56.186613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:49.936 [2024-11-10 15:18:56.186654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:49.936 pt1 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:49.936 15:18:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.936 "name": "raid_bdev1", 00:09:49.936 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:49.936 "strip_size_kb": 64, 00:09:49.936 "state": "configuring", 00:09:49.936 "raid_level": "concat", 00:09:49.936 "superblock": true, 00:09:49.936 "num_base_bdevs": 3, 00:09:49.936 "num_base_bdevs_discovered": 1, 00:09:49.936 "num_base_bdevs_operational": 3, 00:09:49.936 "base_bdevs_list": [ 
00:09:49.936 { 00:09:49.936 "name": "pt1", 00:09:49.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:49.936 "is_configured": true, 00:09:49.936 "data_offset": 2048, 00:09:49.936 "data_size": 63488 00:09:49.936 }, 00:09:49.936 { 00:09:49.936 "name": null, 00:09:49.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:49.936 "is_configured": false, 00:09:49.936 "data_offset": 2048, 00:09:49.936 "data_size": 63488 00:09:49.936 }, 00:09:49.936 { 00:09:49.936 "name": null, 00:09:49.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:49.936 "is_configured": false, 00:09:49.936 "data_offset": 2048, 00:09:49.936 "data_size": 63488 00:09:49.936 } 00:09:49.936 ] 00:09:49.936 }' 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.936 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 [2024-11-10 15:18:56.627922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.505 [2024-11-10 15:18:56.628112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.505 [2024-11-10 15:18:56.628162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:50.505 [2024-11-10 15:18:56.628196] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.505 [2024-11-10 15:18:56.628726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.505 [2024-11-10 
15:18:56.628785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.505 [2024-11-10 15:18:56.628909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.505 [2024-11-10 15:18:56.628963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.505 pt2 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 [2024-11-10 15:18:56.635932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.505 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.505 "name": "raid_bdev1", 00:09:50.505 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:50.505 "strip_size_kb": 64, 00:09:50.505 "state": "configuring", 00:09:50.505 "raid_level": "concat", 00:09:50.505 "superblock": true, 00:09:50.505 "num_base_bdevs": 3, 00:09:50.505 "num_base_bdevs_discovered": 1, 00:09:50.505 "num_base_bdevs_operational": 3, 00:09:50.505 "base_bdevs_list": [ 00:09:50.505 { 00:09:50.505 "name": "pt1", 00:09:50.505 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:50.505 "is_configured": true, 00:09:50.505 "data_offset": 2048, 00:09:50.505 "data_size": 63488 00:09:50.505 }, 00:09:50.505 { 00:09:50.505 "name": null, 00:09:50.505 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:50.505 "is_configured": false, 00:09:50.505 "data_offset": 0, 00:09:50.505 "data_size": 63488 00:09:50.505 }, 00:09:50.505 { 00:09:50.505 "name": null, 00:09:50.505 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:50.505 "is_configured": false, 00:09:50.505 "data_offset": 2048, 00:09:50.506 "data_size": 63488 00:09:50.506 } 00:09:50.506 ] 00:09:50.506 }' 00:09:50.506 15:18:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.506 15:18:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.765 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:50.765 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.765 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:50.765 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.765 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.765 [2024-11-10 15:18:57.084085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:50.765 [2024-11-10 15:18:57.084187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.765 [2024-11-10 15:18:57.084210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:50.765 [2024-11-10 15:18:57.084222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.765 [2024-11-10 15:18:57.084727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.766 [2024-11-10 15:18:57.084747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:50.766 [2024-11-10 15:18:57.084835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:50.766 [2024-11-10 15:18:57.084866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:50.766 pt2 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.766 [2024-11-10 15:18:57.095979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:50.766 [2024-11-10 15:18:57.096129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.766 [2024-11-10 15:18:57.096149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:50.766 [2024-11-10 15:18:57.096160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.766 [2024-11-10 15:18:57.096524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.766 [2024-11-10 15:18:57.096544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:50.766 [2024-11-10 15:18:57.096601] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:50.766 [2024-11-10 15:18:57.096633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:50.766 [2024-11-10 15:18:57.096727] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:09:50.766 [2024-11-10 15:18:57.096739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:50.766 [2024-11-10 15:18:57.096997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:09:50.766 [2024-11-10 15:18:57.097126] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:09:50.766 [2024-11-10 15:18:57.097134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:09:50.766 [2024-11-10 15:18:57.097237] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.766 pt3 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.766 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 15:18:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.026 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.026 "name": "raid_bdev1", 00:09:51.026 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:51.026 "strip_size_kb": 64, 00:09:51.026 "state": "online", 00:09:51.026 "raid_level": "concat", 00:09:51.026 "superblock": true, 00:09:51.026 "num_base_bdevs": 3, 00:09:51.026 "num_base_bdevs_discovered": 3, 00:09:51.026 "num_base_bdevs_operational": 3, 00:09:51.026 "base_bdevs_list": [ 00:09:51.026 { 00:09:51.026 "name": "pt1", 00:09:51.026 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.026 "is_configured": true, 00:09:51.026 "data_offset": 2048, 00:09:51.026 "data_size": 63488 00:09:51.026 }, 00:09:51.026 { 00:09:51.026 "name": "pt2", 00:09:51.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.026 "is_configured": true, 00:09:51.026 "data_offset": 2048, 00:09:51.026 "data_size": 63488 00:09:51.026 }, 00:09:51.026 { 00:09:51.026 "name": "pt3", 00:09:51.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.026 "is_configured": true, 00:09:51.026 "data_offset": 2048, 00:09:51.026 "data_size": 63488 00:09:51.026 } 00:09:51.026 ] 00:09:51.026 }' 00:09:51.026 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.026 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:51.286 15:18:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.286 [2024-11-10 15:18:57.528500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:51.286 "name": "raid_bdev1", 00:09:51.286 "aliases": [ 00:09:51.286 "35c7c228-6ed9-4012-af2f-93298737cc31" 00:09:51.286 ], 00:09:51.286 "product_name": "Raid Volume", 00:09:51.286 "block_size": 512, 00:09:51.286 "num_blocks": 190464, 00:09:51.286 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:51.286 "assigned_rate_limits": { 00:09:51.286 "rw_ios_per_sec": 0, 00:09:51.286 "rw_mbytes_per_sec": 0, 00:09:51.286 "r_mbytes_per_sec": 0, 00:09:51.286 "w_mbytes_per_sec": 0 00:09:51.286 }, 00:09:51.286 "claimed": false, 00:09:51.286 "zoned": false, 00:09:51.286 "supported_io_types": { 00:09:51.286 "read": true, 00:09:51.286 "write": true, 00:09:51.286 "unmap": true, 00:09:51.286 "flush": true, 00:09:51.286 "reset": true, 00:09:51.286 "nvme_admin": false, 00:09:51.286 "nvme_io": false, 00:09:51.286 "nvme_io_md": false, 00:09:51.286 "write_zeroes": true, 00:09:51.286 "zcopy": false, 00:09:51.286 "get_zone_info": false, 00:09:51.286 "zone_management": false, 00:09:51.286 "zone_append": false, 00:09:51.286 "compare": false, 00:09:51.286 "compare_and_write": false, 00:09:51.286 "abort": false, 00:09:51.286 "seek_hole": false, 00:09:51.286 
"seek_data": false, 00:09:51.286 "copy": false, 00:09:51.286 "nvme_iov_md": false 00:09:51.286 }, 00:09:51.286 "memory_domains": [ 00:09:51.286 { 00:09:51.286 "dma_device_id": "system", 00:09:51.286 "dma_device_type": 1 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.286 "dma_device_type": 2 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "dma_device_id": "system", 00:09:51.286 "dma_device_type": 1 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.286 "dma_device_type": 2 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "dma_device_id": "system", 00:09:51.286 "dma_device_type": 1 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.286 "dma_device_type": 2 00:09:51.286 } 00:09:51.286 ], 00:09:51.286 "driver_specific": { 00:09:51.286 "raid": { 00:09:51.286 "uuid": "35c7c228-6ed9-4012-af2f-93298737cc31", 00:09:51.286 "strip_size_kb": 64, 00:09:51.286 "state": "online", 00:09:51.286 "raid_level": "concat", 00:09:51.286 "superblock": true, 00:09:51.286 "num_base_bdevs": 3, 00:09:51.286 "num_base_bdevs_discovered": 3, 00:09:51.286 "num_base_bdevs_operational": 3, 00:09:51.286 "base_bdevs_list": [ 00:09:51.286 { 00:09:51.286 "name": "pt1", 00:09:51.286 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.286 "is_configured": true, 00:09:51.286 "data_offset": 2048, 00:09:51.286 "data_size": 63488 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "name": "pt2", 00:09:51.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.286 "is_configured": true, 00:09:51.286 "data_offset": 2048, 00:09:51.286 "data_size": 63488 00:09:51.286 }, 00:09:51.286 { 00:09:51.286 "name": "pt3", 00:09:51.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.286 "is_configured": true, 00:09:51.286 "data_offset": 2048, 00:09:51.286 "data_size": 63488 00:09:51.286 } 00:09:51.286 ] 00:09:51.286 } 00:09:51.286 } 00:09:51.286 }' 00:09:51.286 15:18:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:51.286 pt2 00:09:51.286 pt3' 00:09:51.286 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:51.545 [2024-11-10 15:18:57.824534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.545 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
35c7c228-6ed9-4012-af2f-93298737cc31 '!=' 35c7c228-6ed9-4012-af2f-93298737cc31 ']' 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79332 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 79332 ']' 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 79332 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79332 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79332' 00:09:51.546 killing process with pid 79332 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 79332 00:09:51.546 [2024-11-10 15:18:57.905963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:51.546 15:18:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 79332 00:09:51.546 [2024-11-10 15:18:57.906219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.546 [2024-11-10 15:18:57.906295] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:09:51.546 [2024-11-10 15:18:57.906310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:09:51.805 [2024-11-10 15:18:57.967357] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.065 15:18:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:52.065 00:09:52.065 real 0m4.190s 00:09:52.065 user 0m6.469s 00:09:52.065 sys 0m0.940s 00:09:52.065 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.065 15:18:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 ************************************ 00:09:52.065 END TEST raid_superblock_test 00:09:52.065 ************************************ 00:09:52.065 15:18:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:52.065 15:18:58 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:52.065 15:18:58 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.065 15:18:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.065 ************************************ 00:09:52.065 START TEST raid_read_error_test 00:09:52.065 ************************************ 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 read 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.065 15:18:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.I7GKwiv4Uk 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79575 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79575 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 79575 ']' 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:52.065 15:18:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.325 [2024-11-10 15:18:58.471229] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:52.325 [2024-11-10 15:18:58.471352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79575 ] 00:09:52.325 [2024-11-10 15:18:58.604541] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:09:52.325 [2024-11-10 15:18:58.644543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.584 [2024-11-10 15:18:58.687475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.584 [2024-11-10 15:18:58.763759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.584 [2024-11-10 15:18:58.763813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 BaseBdev1_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 true 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 [2024-11-10 15:18:59.342570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:53.154 [2024-11-10 15:18:59.342639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.154 [2024-11-10 15:18:59.342663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:53.154 [2024-11-10 15:18:59.342680] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.154 [2024-11-10 15:18:59.345242] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.154 [2024-11-10 15:18:59.345282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:53.154 BaseBdev1 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 BaseBdev2_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 true 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.154 [2024-11-10 15:18:59.389282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:53.154 [2024-11-10 15:18:59.389334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.154 [2024-11-10 15:18:59.389350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:53.154 [2024-11-10 15:18:59.389360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.154 [2024-11-10 15:18:59.391732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.154 [2024-11-10 15:18:59.391845] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:53.154 BaseBdev2 00:09:53.154 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.155 BaseBdev3_malloc 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:53.155 
15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.155 true 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.155 [2024-11-10 15:18:59.435776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:53.155 [2024-11-10 15:18:59.435896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.155 [2024-11-10 15:18:59.435917] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:53.155 [2024-11-10 15:18:59.435929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.155 [2024-11-10 15:18:59.438309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.155 [2024-11-10 15:18:59.438384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:53.155 BaseBdev3 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.155 [2024-11-10 15:18:59.447839] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.155 [2024-11-10 15:18:59.450039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:53.155 [2024-11-10 15:18:59.450158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.155 [2024-11-10 15:18:59.450359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:53.155 [2024-11-10 15:18:59.450410] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.155 [2024-11-10 15:18:59.450677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:53.155 [2024-11-10 15:18:59.450847] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:53.155 [2024-11-10 15:18:59.450890] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:53.155 [2024-11-10 15:18:59.451056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.155 "name": "raid_bdev1", 00:09:53.155 "uuid": "f1c719b5-c41a-4cdb-9e77-d8a89b45f17a", 00:09:53.155 "strip_size_kb": 64, 00:09:53.155 "state": "online", 00:09:53.155 "raid_level": "concat", 00:09:53.155 "superblock": true, 00:09:53.155 "num_base_bdevs": 3, 00:09:53.155 "num_base_bdevs_discovered": 3, 00:09:53.155 "num_base_bdevs_operational": 3, 00:09:53.155 "base_bdevs_list": [ 00:09:53.155 { 00:09:53.155 "name": "BaseBdev1", 00:09:53.155 "uuid": "db52fd68-8880-5926-8a65-9d553f73b900", 00:09:53.155 "is_configured": true, 00:09:53.155 "data_offset": 2048, 00:09:53.155 "data_size": 63488 00:09:53.155 }, 00:09:53.155 { 00:09:53.155 "name": "BaseBdev2", 00:09:53.155 "uuid": "1ec74af9-ba28-5ba3-a8b9-35586954c901", 00:09:53.155 "is_configured": true, 00:09:53.155 "data_offset": 2048, 00:09:53.155 "data_size": 63488 00:09:53.155 }, 00:09:53.155 { 00:09:53.155 "name": "BaseBdev3", 00:09:53.155 "uuid": "18b460f9-2286-5f18-898d-4676689e3370", 00:09:53.155 "is_configured": true, 00:09:53.155 "data_offset": 
2048, 00:09:53.155 "data_size": 63488 00:09:53.155 } 00:09:53.155 ] 00:09:53.155 }' 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.155 15:18:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.721 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:53.721 15:18:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.721 [2024-11-10 15:18:59.956561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.659 "name": "raid_bdev1", 00:09:54.659 "uuid": "f1c719b5-c41a-4cdb-9e77-d8a89b45f17a", 00:09:54.659 "strip_size_kb": 64, 00:09:54.659 "state": "online", 00:09:54.659 "raid_level": "concat", 00:09:54.659 "superblock": true, 00:09:54.659 "num_base_bdevs": 3, 00:09:54.659 "num_base_bdevs_discovered": 3, 00:09:54.659 "num_base_bdevs_operational": 3, 00:09:54.659 "base_bdevs_list": [ 00:09:54.659 { 00:09:54.659 "name": "BaseBdev1", 00:09:54.659 "uuid": "db52fd68-8880-5926-8a65-9d553f73b900", 00:09:54.659 "is_configured": true, 00:09:54.659 "data_offset": 2048, 00:09:54.659 "data_size": 63488 00:09:54.659 }, 00:09:54.659 { 00:09:54.659 "name": "BaseBdev2", 00:09:54.659 "uuid": "1ec74af9-ba28-5ba3-a8b9-35586954c901", 00:09:54.659 "is_configured": true, 00:09:54.659 "data_offset": 2048, 
00:09:54.659 "data_size": 63488 00:09:54.659 }, 00:09:54.659 { 00:09:54.659 "name": "BaseBdev3", 00:09:54.659 "uuid": "18b460f9-2286-5f18-898d-4676689e3370", 00:09:54.659 "is_configured": true, 00:09:54.659 "data_offset": 2048, 00:09:54.659 "data_size": 63488 00:09:54.659 } 00:09:54.659 ] 00:09:54.659 }' 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.659 15:19:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.918 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.918 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.918 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.918 [2024-11-10 15:19:01.263628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.918 [2024-11-10 15:19:01.263778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.918 [2024-11-10 15:19:01.266318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.918 [2024-11-10 15:19:01.266411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.918 [2024-11-10 15:19:01.266478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.918 [2024-11-10 15:19:01.266533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:54.918 { 00:09:54.918 "results": [ 00:09:54.918 { 00:09:54.918 "job": "raid_bdev1", 00:09:54.918 "core_mask": "0x1", 00:09:54.918 "workload": "randrw", 00:09:54.918 "percentage": 50, 00:09:54.918 "status": "finished", 00:09:54.918 "queue_depth": 1, 00:09:54.918 "io_size": 131072, 00:09:54.918 "runtime": 1.304937, 00:09:54.918 "iops": 14746.305760354715, 00:09:54.918 "mibps": 
1843.2882200443394, 00:09:54.918 "io_failed": 1, 00:09:54.918 "io_timeout": 0, 00:09:54.918 "avg_latency_us": 95.26063913752083, 00:09:54.918 "min_latency_us": 25.325546936285193, 00:09:54.918 "max_latency_us": 1470.889915453674 00:09:54.918 } 00:09:54.918 ], 00:09:54.918 "core_count": 1 00:09:54.918 } 00:09:54.918 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79575 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 79575 ']' 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 79575 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:54.919 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79575 00:09:55.178 killing process with pid 79575 00:09:55.178 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:55.178 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:55.178 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79575' 00:09:55.178 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 79575 00:09:55.178 [2024-11-10 15:19:01.314678] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.178 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 79575 00:09:55.178 [2024-11-10 15:19:01.362204] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.I7GKwiv4Uk 
00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.77 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.438 ************************************ 00:09:55.438 END TEST raid_read_error_test 00:09:55.438 ************************************ 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.77 != \0\.\0\0 ]] 00:09:55.438 00:09:55.438 real 0m3.330s 00:09:55.438 user 0m4.026s 00:09:55.438 sys 0m0.620s 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.438 15:19:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 15:19:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:55.438 15:19:01 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:55.438 15:19:01 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.438 15:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.438 ************************************ 00:09:55.438 START TEST raid_write_error_test 00:09:55.438 ************************************ 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 3 write 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 
00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:55.438 
15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OuADJYDMnG 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79710 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79710 00:09:55.438 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 79710 ']' 00:09:55.439 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.439 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:55.439 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.698 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:55.698 15:19:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.698 [2024-11-10 15:19:01.879330] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:09:55.698 [2024-11-10 15:19:01.879530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79710 ] 00:09:55.698 [2024-11-10 15:19:02.014089] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:55.698 [2024-11-10 15:19:02.053758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.958 [2024-11-10 15:19:02.095883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.958 [2024-11-10 15:19:02.173350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.958 [2024-11-10 15:19:02.173398] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 BaseBdev1_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 true 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 [2024-11-10 15:19:02.823509] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:56.524 [2024-11-10 15:19:02.823572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.524 [2024-11-10 15:19:02.823591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:56.524 [2024-11-10 15:19:02.823605] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.524 [2024-11-10 15:19:02.825857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.524 [2024-11-10 15:19:02.825900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:56.524 BaseBdev1 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 BaseBdev2_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 true 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.524 [2024-11-10 15:19:02.864235] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:56.524 [2024-11-10 15:19:02.864297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.524 [2024-11-10 15:19:02.864317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:56.524 [2024-11-10 15:19:02.864327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.524 [2024-11-10 15:19:02.866558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.524 [2024-11-10 15:19:02.866640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:56.524 BaseBdev2 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:56.524 15:19:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.524 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.783 BaseBdev3_malloc 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.783 true 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.783 [2024-11-10 15:19:02.905052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:56.783 [2024-11-10 15:19:02.905153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:56.783 [2024-11-10 15:19:02.905175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:56.783 [2024-11-10 15:19:02.905185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:56.783 [2024-11-10 15:19:02.907328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:56.783 [2024-11-10 15:19:02.907407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:56.783 BaseBdev3 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.783 [2024-11-10 15:19:02.917099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.783 [2024-11-10 15:19:02.918936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.783 [2024-11-10 15:19:02.919018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.783 [2024-11-10 15:19:02.919231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.783 [2024-11-10 15:19:02.919250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:56.783 [2024-11-10 15:19:02.919501] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:09:56.783 [2024-11-10 15:19:02.919634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.783 [2024-11-10 15:19:02.919645] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:56.783 [2024-11-10 15:19:02.919782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.783 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.783 "name": "raid_bdev1", 00:09:56.783 "uuid": "7424447b-2d14-489d-be14-0bc2801083d3", 00:09:56.783 "strip_size_kb": 64, 00:09:56.783 "state": "online", 00:09:56.783 "raid_level": "concat", 00:09:56.783 "superblock": true, 00:09:56.783 "num_base_bdevs": 3, 00:09:56.783 "num_base_bdevs_discovered": 3, 00:09:56.783 "num_base_bdevs_operational": 3, 00:09:56.783 "base_bdevs_list": [ 00:09:56.783 { 00:09:56.783 "name": "BaseBdev1", 00:09:56.783 "uuid": "4daf8156-315a-5f9c-8a9d-752e5c9452f4", 00:09:56.783 "is_configured": true, 00:09:56.783 "data_offset": 2048, 
00:09:56.783 "data_size": 63488 00:09:56.783 }, 00:09:56.783 { 00:09:56.783 "name": "BaseBdev2", 00:09:56.783 "uuid": "ebad45cd-864a-539b-880f-8e2d9fa41a5a", 00:09:56.783 "is_configured": true, 00:09:56.783 "data_offset": 2048, 00:09:56.783 "data_size": 63488 00:09:56.783 }, 00:09:56.783 { 00:09:56.783 "name": "BaseBdev3", 00:09:56.783 "uuid": "b74135d5-48b1-5cf8-b0b1-d739a90a820c", 00:09:56.783 "is_configured": true, 00:09:56.784 "data_offset": 2048, 00:09:56.784 "data_size": 63488 00:09:56.784 } 00:09:56.784 ] 00:09:56.784 }' 00:09:56.784 15:19:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.784 15:19:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.043 15:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:57.043 15:19:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:57.302 [2024-11-10 15:19:03.429692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # 
verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.240 "name": "raid_bdev1", 00:09:58.240 "uuid": "7424447b-2d14-489d-be14-0bc2801083d3", 00:09:58.240 "strip_size_kb": 64, 00:09:58.240 "state": "online", 00:09:58.240 "raid_level": "concat", 00:09:58.240 "superblock": true, 00:09:58.240 "num_base_bdevs": 3, 00:09:58.240 "num_base_bdevs_discovered": 3, 
00:09:58.240 "num_base_bdevs_operational": 3, 00:09:58.240 "base_bdevs_list": [ 00:09:58.240 { 00:09:58.240 "name": "BaseBdev1", 00:09:58.240 "uuid": "4daf8156-315a-5f9c-8a9d-752e5c9452f4", 00:09:58.240 "is_configured": true, 00:09:58.240 "data_offset": 2048, 00:09:58.240 "data_size": 63488 00:09:58.240 }, 00:09:58.240 { 00:09:58.240 "name": "BaseBdev2", 00:09:58.240 "uuid": "ebad45cd-864a-539b-880f-8e2d9fa41a5a", 00:09:58.240 "is_configured": true, 00:09:58.240 "data_offset": 2048, 00:09:58.240 "data_size": 63488 00:09:58.240 }, 00:09:58.240 { 00:09:58.240 "name": "BaseBdev3", 00:09:58.240 "uuid": "b74135d5-48b1-5cf8-b0b1-d739a90a820c", 00:09:58.240 "is_configured": true, 00:09:58.240 "data_offset": 2048, 00:09:58.240 "data_size": 63488 00:09:58.240 } 00:09:58.240 ] 00:09:58.240 }' 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.240 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.500 [2024-11-10 15:19:04.796115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:58.500 [2024-11-10 15:19:04.796156] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.500 [2024-11-10 15:19:04.798730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.500 [2024-11-10 15:19:04.798843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.500 [2024-11-10 15:19:04.798890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:58.500 [2024-11-10 15:19:04.798900] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:58.500 { 00:09:58.500 "results": [ 00:09:58.500 { 00:09:58.500 "job": "raid_bdev1", 00:09:58.500 "core_mask": "0x1", 00:09:58.500 "workload": "randrw", 00:09:58.500 "percentage": 50, 00:09:58.500 "status": "finished", 00:09:58.500 "queue_depth": 1, 00:09:58.500 "io_size": 131072, 00:09:58.500 "runtime": 1.36442, 00:09:58.500 "iops": 16630.5096671113, 00:09:58.500 "mibps": 2078.8137083889123, 00:09:58.500 "io_failed": 1, 00:09:58.500 "io_timeout": 0, 00:09:58.500 "avg_latency_us": 83.33904221268955, 00:09:58.500 "min_latency_us": 25.994944652662774, 00:09:58.500 "max_latency_us": 1406.6277346814259 00:09:58.500 } 00:09:58.500 ], 00:09:58.500 "core_count": 1 00:09:58.500 } 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79710 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 79710 ']' 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 79710 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79710 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:58.500 killing process with pid 79710 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79710' 00:09:58.500 15:19:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 79710 00:09:58.500 [2024-11-10 15:19:04.835720] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:58.500 15:19:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 79710 00:09:58.500 [2024-11-10 15:19:04.861102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OuADJYDMnG 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:58.760 00:09:58.760 real 0m3.306s 00:09:58.760 user 0m4.174s 00:09:58.760 sys 0m0.596s 00:09:58.760 ************************************ 00:09:58.760 END TEST raid_write_error_test 00:09:58.760 ************************************ 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.760 15:19:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.019 15:19:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:59.019 15:19:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:09:59.019 15:19:05 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:59.019 15:19:05 
bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:59.019 15:19:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.019 ************************************ 00:09:59.019 START TEST raid_state_function_test 00:09:59.019 ************************************ 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 false 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79841 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79841' 00:09:59.019 Process raid pid: 79841 00:09:59.019 15:19:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79841 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 79841 ']' 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 
-- # local max_retries=100 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:59.020 15:19:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.020 [2024-11-10 15:19:05.249378] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:09:59.020 [2024-11-10 15:19:05.249501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.279 [2024-11-10 15:19:05.381921] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:09:59.279 [2024-11-10 15:19:05.409879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.279 [2024-11-10 15:19:05.435699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.279 [2024-11-10 15:19:05.478216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.279 [2024-11-10 15:19:05.478257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.848 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.848 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:09:59.848 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.848 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.848 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.848 [2024-11-10 15:19:06.096771] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:59.848 [2024-11-10 15:19:06.096839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:59.848 [2024-11-10 15:19:06.096852] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.848 [2024-11-10 15:19:06.096860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.849 [2024-11-10 15:19:06.096870] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.849 [2024-11-10 15:19:06.096877] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.849 "name": "Existed_Raid", 00:09:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.849 "strip_size_kb": 0, 00:09:59.849 "state": "configuring", 00:09:59.849 "raid_level": "raid1", 00:09:59.849 
"superblock": false, 00:09:59.849 "num_base_bdevs": 3, 00:09:59.849 "num_base_bdevs_discovered": 0, 00:09:59.849 "num_base_bdevs_operational": 3, 00:09:59.849 "base_bdevs_list": [ 00:09:59.849 { 00:09:59.849 "name": "BaseBdev1", 00:09:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.849 "is_configured": false, 00:09:59.849 "data_offset": 0, 00:09:59.849 "data_size": 0 00:09:59.849 }, 00:09:59.849 { 00:09:59.849 "name": "BaseBdev2", 00:09:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.849 "is_configured": false, 00:09:59.849 "data_offset": 0, 00:09:59.849 "data_size": 0 00:09:59.849 }, 00:09:59.849 { 00:09:59.849 "name": "BaseBdev3", 00:09:59.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.849 "is_configured": false, 00:09:59.849 "data_offset": 0, 00:09:59.849 "data_size": 0 00:09:59.849 } 00:09:59.849 ] 00:09:59.849 }' 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.849 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 [2024-11-10 15:19:06.572797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.420 [2024-11-10 15:19:06.572842] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.420 15:19:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 [2024-11-10 15:19:06.584830] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.420 [2024-11-10 15:19:06.584887] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.420 [2024-11-10 15:19:06.584899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.420 [2024-11-10 15:19:06.584923] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.420 [2024-11-10 15:19:06.584931] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.420 [2024-11-10 15:19:06.584938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 [2024-11-10 15:19:06.605829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.420 BaseBdev1 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 [ 00:10:00.420 { 00:10:00.420 "name": "BaseBdev1", 00:10:00.420 "aliases": [ 00:10:00.420 "2f58d6c5-4ea3-4036-ab2f-2081af3d5209" 00:10:00.420 ], 00:10:00.420 "product_name": "Malloc disk", 00:10:00.420 "block_size": 512, 00:10:00.420 "num_blocks": 65536, 00:10:00.420 "uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:00.420 "assigned_rate_limits": { 00:10:00.420 "rw_ios_per_sec": 0, 00:10:00.420 "rw_mbytes_per_sec": 0, 00:10:00.420 "r_mbytes_per_sec": 0, 00:10:00.420 "w_mbytes_per_sec": 0 00:10:00.420 }, 00:10:00.420 "claimed": true, 00:10:00.420 "claim_type": "exclusive_write", 00:10:00.420 "zoned": false, 00:10:00.420 "supported_io_types": { 00:10:00.420 "read": true, 00:10:00.420 "write": true, 00:10:00.420 "unmap": true, 00:10:00.420 "flush": true, 00:10:00.420 "reset": true, 00:10:00.420 
"nvme_admin": false, 00:10:00.420 "nvme_io": false, 00:10:00.420 "nvme_io_md": false, 00:10:00.420 "write_zeroes": true, 00:10:00.420 "zcopy": true, 00:10:00.420 "get_zone_info": false, 00:10:00.420 "zone_management": false, 00:10:00.420 "zone_append": false, 00:10:00.420 "compare": false, 00:10:00.420 "compare_and_write": false, 00:10:00.420 "abort": true, 00:10:00.420 "seek_hole": false, 00:10:00.420 "seek_data": false, 00:10:00.420 "copy": true, 00:10:00.420 "nvme_iov_md": false 00:10:00.420 }, 00:10:00.420 "memory_domains": [ 00:10:00.420 { 00:10:00.420 "dma_device_id": "system", 00:10:00.420 "dma_device_type": 1 00:10:00.420 }, 00:10:00.420 { 00:10:00.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.420 "dma_device_type": 2 00:10:00.420 } 00:10:00.420 ], 00:10:00.420 "driver_specific": {} 00:10:00.420 } 00:10:00.420 ] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.420 "name": "Existed_Raid", 00:10:00.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.420 "strip_size_kb": 0, 00:10:00.420 "state": "configuring", 00:10:00.420 "raid_level": "raid1", 00:10:00.420 "superblock": false, 00:10:00.420 "num_base_bdevs": 3, 00:10:00.420 "num_base_bdevs_discovered": 1, 00:10:00.420 "num_base_bdevs_operational": 3, 00:10:00.420 "base_bdevs_list": [ 00:10:00.420 { 00:10:00.420 "name": "BaseBdev1", 00:10:00.420 "uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:00.420 "is_configured": true, 00:10:00.420 "data_offset": 0, 00:10:00.420 "data_size": 65536 00:10:00.420 }, 00:10:00.420 { 00:10:00.420 "name": "BaseBdev2", 00:10:00.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.420 "is_configured": false, 00:10:00.420 "data_offset": 0, 00:10:00.420 "data_size": 0 00:10:00.420 }, 00:10:00.420 { 00:10:00.420 "name": "BaseBdev3", 00:10:00.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.420 "is_configured": false, 00:10:00.420 "data_offset": 0, 00:10:00.420 "data_size": 0 00:10:00.420 } 00:10:00.420 ] 00:10:00.420 }' 
00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.420 15:19:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.990 [2024-11-10 15:19:07.081988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:00.990 [2024-11-10 15:19:07.082142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.990 [2024-11-10 15:19:07.090012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:00.990 [2024-11-10 15:19:07.091866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.990 [2024-11-10 15:19:07.091908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.990 [2024-11-10 15:19:07.091922] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:00.990 [2024-11-10 15:19:07.091930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:00.990 15:19:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.990 "name": "Existed_Raid", 00:10:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.990 "strip_size_kb": 0, 00:10:00.990 "state": "configuring", 00:10:00.990 "raid_level": "raid1", 00:10:00.990 "superblock": false, 00:10:00.990 "num_base_bdevs": 3, 00:10:00.990 "num_base_bdevs_discovered": 1, 00:10:00.990 "num_base_bdevs_operational": 3, 00:10:00.990 "base_bdevs_list": [ 00:10:00.990 { 00:10:00.990 "name": "BaseBdev1", 00:10:00.990 "uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:00.990 "is_configured": true, 00:10:00.990 "data_offset": 0, 00:10:00.990 "data_size": 65536 00:10:00.990 }, 00:10:00.990 { 00:10:00.990 "name": "BaseBdev2", 00:10:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.990 "is_configured": false, 00:10:00.990 "data_offset": 0, 00:10:00.990 "data_size": 0 00:10:00.990 }, 00:10:00.990 { 00:10:00.990 "name": "BaseBdev3", 00:10:00.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.990 "is_configured": false, 00:10:00.990 "data_offset": 0, 00:10:00.990 "data_size": 0 00:10:00.990 } 00:10:00.990 ] 00:10:00.990 }' 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.990 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.250 [2024-11-10 15:19:07.533333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.250 BaseBdev2 00:10:01.250 15:19:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.250 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.250 [ 00:10:01.250 { 00:10:01.250 "name": "BaseBdev2", 00:10:01.250 "aliases": [ 00:10:01.250 "a982f1a5-74c7-4040-a18d-d2b855f3be30" 00:10:01.250 ], 00:10:01.250 "product_name": "Malloc disk", 00:10:01.250 "block_size": 512, 00:10:01.250 "num_blocks": 65536, 00:10:01.250 "uuid": "a982f1a5-74c7-4040-a18d-d2b855f3be30", 00:10:01.250 "assigned_rate_limits": { 00:10:01.250 "rw_ios_per_sec": 0, 00:10:01.251 "rw_mbytes_per_sec": 0, 00:10:01.251 
"r_mbytes_per_sec": 0, 00:10:01.251 "w_mbytes_per_sec": 0 00:10:01.251 }, 00:10:01.251 "claimed": true, 00:10:01.251 "claim_type": "exclusive_write", 00:10:01.251 "zoned": false, 00:10:01.251 "supported_io_types": { 00:10:01.251 "read": true, 00:10:01.251 "write": true, 00:10:01.251 "unmap": true, 00:10:01.251 "flush": true, 00:10:01.251 "reset": true, 00:10:01.251 "nvme_admin": false, 00:10:01.251 "nvme_io": false, 00:10:01.251 "nvme_io_md": false, 00:10:01.251 "write_zeroes": true, 00:10:01.251 "zcopy": true, 00:10:01.251 "get_zone_info": false, 00:10:01.251 "zone_management": false, 00:10:01.251 "zone_append": false, 00:10:01.251 "compare": false, 00:10:01.251 "compare_and_write": false, 00:10:01.251 "abort": true, 00:10:01.251 "seek_hole": false, 00:10:01.251 "seek_data": false, 00:10:01.251 "copy": true, 00:10:01.251 "nvme_iov_md": false 00:10:01.251 }, 00:10:01.251 "memory_domains": [ 00:10:01.251 { 00:10:01.251 "dma_device_id": "system", 00:10:01.251 "dma_device_type": 1 00:10:01.251 }, 00:10:01.251 { 00:10:01.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.251 "dma_device_type": 2 00:10:01.251 } 00:10:01.251 ], 00:10:01.251 "driver_specific": {} 00:10:01.251 } 00:10:01.251 ] 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.251 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.511 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.511 "name": "Existed_Raid", 00:10:01.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.511 "strip_size_kb": 0, 00:10:01.511 "state": "configuring", 00:10:01.511 "raid_level": "raid1", 00:10:01.511 "superblock": false, 00:10:01.511 "num_base_bdevs": 3, 00:10:01.511 "num_base_bdevs_discovered": 2, 00:10:01.511 "num_base_bdevs_operational": 3, 00:10:01.511 "base_bdevs_list": [ 00:10:01.511 { 00:10:01.511 "name": "BaseBdev1", 00:10:01.511 "uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:01.511 
"is_configured": true, 00:10:01.511 "data_offset": 0, 00:10:01.511 "data_size": 65536 00:10:01.511 }, 00:10:01.511 { 00:10:01.511 "name": "BaseBdev2", 00:10:01.511 "uuid": "a982f1a5-74c7-4040-a18d-d2b855f3be30", 00:10:01.511 "is_configured": true, 00:10:01.511 "data_offset": 0, 00:10:01.511 "data_size": 65536 00:10:01.511 }, 00:10:01.511 { 00:10:01.511 "name": "BaseBdev3", 00:10:01.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.511 "is_configured": false, 00:10:01.511 "data_offset": 0, 00:10:01.511 "data_size": 0 00:10:01.511 } 00:10:01.511 ] 00:10:01.511 }' 00:10:01.511 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.511 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 15:19:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:01.771 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.771 15:19:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 [2024-11-10 15:19:08.016530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.771 [2024-11-10 15:19:08.016668] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:01.771 [2024-11-10 15:19:08.016714] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:01.771 [2024-11-10 15:19:08.017142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:01.771 [2024-11-10 15:19:08.017355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:01.771 [2024-11-10 15:19:08.017411] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:01.771 [2024-11-10 15:19:08.017680] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:01.771 BaseBdev3 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.771 [ 00:10:01.771 { 00:10:01.771 "name": "BaseBdev3", 00:10:01.771 "aliases": [ 00:10:01.771 "c041fb39-72dd-4572-96bf-089d0a09fdd1" 00:10:01.771 ], 00:10:01.771 "product_name": "Malloc disk", 00:10:01.771 "block_size": 512, 00:10:01.771 "num_blocks": 65536, 00:10:01.771 "uuid": "c041fb39-72dd-4572-96bf-089d0a09fdd1", 00:10:01.771 "assigned_rate_limits": { 
00:10:01.771 "rw_ios_per_sec": 0, 00:10:01.771 "rw_mbytes_per_sec": 0, 00:10:01.771 "r_mbytes_per_sec": 0, 00:10:01.771 "w_mbytes_per_sec": 0 00:10:01.771 }, 00:10:01.771 "claimed": true, 00:10:01.771 "claim_type": "exclusive_write", 00:10:01.771 "zoned": false, 00:10:01.771 "supported_io_types": { 00:10:01.771 "read": true, 00:10:01.771 "write": true, 00:10:01.771 "unmap": true, 00:10:01.771 "flush": true, 00:10:01.771 "reset": true, 00:10:01.771 "nvme_admin": false, 00:10:01.771 "nvme_io": false, 00:10:01.771 "nvme_io_md": false, 00:10:01.771 "write_zeroes": true, 00:10:01.771 "zcopy": true, 00:10:01.771 "get_zone_info": false, 00:10:01.771 "zone_management": false, 00:10:01.771 "zone_append": false, 00:10:01.771 "compare": false, 00:10:01.771 "compare_and_write": false, 00:10:01.771 "abort": true, 00:10:01.771 "seek_hole": false, 00:10:01.771 "seek_data": false, 00:10:01.771 "copy": true, 00:10:01.771 "nvme_iov_md": false 00:10:01.771 }, 00:10:01.771 "memory_domains": [ 00:10:01.771 { 00:10:01.771 "dma_device_id": "system", 00:10:01.771 "dma_device_type": 1 00:10:01.771 }, 00:10:01.771 { 00:10:01.771 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.771 "dma_device_type": 2 00:10:01.771 } 00:10:01.771 ], 00:10:01.771 "driver_specific": {} 00:10:01.771 } 00:10:01.771 ] 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:01.771 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.772 15:19:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.772 "name": "Existed_Raid", 00:10:01.772 "uuid": "1e6847b7-8fd1-4bbf-a11f-720596bbd62c", 00:10:01.772 "strip_size_kb": 0, 00:10:01.772 "state": "online", 00:10:01.772 "raid_level": "raid1", 00:10:01.772 "superblock": false, 00:10:01.772 "num_base_bdevs": 3, 00:10:01.772 "num_base_bdevs_discovered": 3, 00:10:01.772 "num_base_bdevs_operational": 3, 00:10:01.772 "base_bdevs_list": [ 00:10:01.772 { 00:10:01.772 "name": "BaseBdev1", 00:10:01.772 
"uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:01.772 "is_configured": true, 00:10:01.772 "data_offset": 0, 00:10:01.772 "data_size": 65536 00:10:01.772 }, 00:10:01.772 { 00:10:01.772 "name": "BaseBdev2", 00:10:01.772 "uuid": "a982f1a5-74c7-4040-a18d-d2b855f3be30", 00:10:01.772 "is_configured": true, 00:10:01.772 "data_offset": 0, 00:10:01.772 "data_size": 65536 00:10:01.772 }, 00:10:01.772 { 00:10:01.772 "name": "BaseBdev3", 00:10:01.772 "uuid": "c041fb39-72dd-4572-96bf-089d0a09fdd1", 00:10:01.772 "is_configured": true, 00:10:01.772 "data_offset": 0, 00:10:01.772 "data_size": 65536 00:10:01.772 } 00:10:01.772 ] 00:10:01.772 }' 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.772 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.341 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 [2024-11-10 
15:19:08.481036] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.342 "name": "Existed_Raid", 00:10:02.342 "aliases": [ 00:10:02.342 "1e6847b7-8fd1-4bbf-a11f-720596bbd62c" 00:10:02.342 ], 00:10:02.342 "product_name": "Raid Volume", 00:10:02.342 "block_size": 512, 00:10:02.342 "num_blocks": 65536, 00:10:02.342 "uuid": "1e6847b7-8fd1-4bbf-a11f-720596bbd62c", 00:10:02.342 "assigned_rate_limits": { 00:10:02.342 "rw_ios_per_sec": 0, 00:10:02.342 "rw_mbytes_per_sec": 0, 00:10:02.342 "r_mbytes_per_sec": 0, 00:10:02.342 "w_mbytes_per_sec": 0 00:10:02.342 }, 00:10:02.342 "claimed": false, 00:10:02.342 "zoned": false, 00:10:02.342 "supported_io_types": { 00:10:02.342 "read": true, 00:10:02.342 "write": true, 00:10:02.342 "unmap": false, 00:10:02.342 "flush": false, 00:10:02.342 "reset": true, 00:10:02.342 "nvme_admin": false, 00:10:02.342 "nvme_io": false, 00:10:02.342 "nvme_io_md": false, 00:10:02.342 "write_zeroes": true, 00:10:02.342 "zcopy": false, 00:10:02.342 "get_zone_info": false, 00:10:02.342 "zone_management": false, 00:10:02.342 "zone_append": false, 00:10:02.342 "compare": false, 00:10:02.342 "compare_and_write": false, 00:10:02.342 "abort": false, 00:10:02.342 "seek_hole": false, 00:10:02.342 "seek_data": false, 00:10:02.342 "copy": false, 00:10:02.342 "nvme_iov_md": false 00:10:02.342 }, 00:10:02.342 "memory_domains": [ 00:10:02.342 { 00:10:02.342 "dma_device_id": "system", 00:10:02.342 "dma_device_type": 1 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.342 "dma_device_type": 2 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "system", 00:10:02.342 "dma_device_type": 1 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:02.342 "dma_device_type": 2 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "system", 00:10:02.342 "dma_device_type": 1 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.342 "dma_device_type": 2 00:10:02.342 } 00:10:02.342 ], 00:10:02.342 "driver_specific": { 00:10:02.342 "raid": { 00:10:02.342 "uuid": "1e6847b7-8fd1-4bbf-a11f-720596bbd62c", 00:10:02.342 "strip_size_kb": 0, 00:10:02.342 "state": "online", 00:10:02.342 "raid_level": "raid1", 00:10:02.342 "superblock": false, 00:10:02.342 "num_base_bdevs": 3, 00:10:02.342 "num_base_bdevs_discovered": 3, 00:10:02.342 "num_base_bdevs_operational": 3, 00:10:02.342 "base_bdevs_list": [ 00:10:02.342 { 00:10:02.342 "name": "BaseBdev1", 00:10:02.342 "uuid": "2f58d6c5-4ea3-4036-ab2f-2081af3d5209", 00:10:02.342 "is_configured": true, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 65536 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "name": "BaseBdev2", 00:10:02.342 "uuid": "a982f1a5-74c7-4040-a18d-d2b855f3be30", 00:10:02.342 "is_configured": true, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 65536 00:10:02.342 }, 00:10:02.342 { 00:10:02.342 "name": "BaseBdev3", 00:10:02.342 "uuid": "c041fb39-72dd-4572-96bf-089d0a09fdd1", 00:10:02.342 "is_configured": true, 00:10:02.342 "data_offset": 0, 00:10:02.342 "data_size": 65536 00:10:02.342 } 00:10:02.342 ] 00:10:02.342 } 00:10:02.342 } 00:10:02.342 }' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:02.342 BaseBdev2 00:10:02.342 BaseBdev3' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.342 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.342 [2024-11-10 15:19:08.700911] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:02.602 15:19:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.602 "name": "Existed_Raid", 00:10:02.602 "uuid": "1e6847b7-8fd1-4bbf-a11f-720596bbd62c", 00:10:02.602 "strip_size_kb": 0, 00:10:02.602 "state": "online", 00:10:02.602 "raid_level": "raid1", 
00:10:02.602 "superblock": false, 00:10:02.602 "num_base_bdevs": 3, 00:10:02.602 "num_base_bdevs_discovered": 2, 00:10:02.602 "num_base_bdevs_operational": 2, 00:10:02.602 "base_bdevs_list": [ 00:10:02.602 { 00:10:02.602 "name": null, 00:10:02.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.602 "is_configured": false, 00:10:02.602 "data_offset": 0, 00:10:02.602 "data_size": 65536 00:10:02.602 }, 00:10:02.602 { 00:10:02.602 "name": "BaseBdev2", 00:10:02.602 "uuid": "a982f1a5-74c7-4040-a18d-d2b855f3be30", 00:10:02.602 "is_configured": true, 00:10:02.602 "data_offset": 0, 00:10:02.602 "data_size": 65536 00:10:02.602 }, 00:10:02.602 { 00:10:02.602 "name": "BaseBdev3", 00:10:02.602 "uuid": "c041fb39-72dd-4572-96bf-089d0a09fdd1", 00:10:02.602 "is_configured": true, 00:10:02.602 "data_offset": 0, 00:10:02.602 "data_size": 65536 00:10:02.602 } 00:10:02.602 ] 00:10:02.602 }' 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.602 15:19:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.862 15:19:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.862 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 [2024-11-10 15:19:09.228879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.122 
[2024-11-10 15:19:09.300378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.122 [2024-11-10 15:19:09.300491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.122 [2024-11-10 15:19:09.312229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.122 [2024-11-10 15:19:09.312287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.122 [2024-11-10 15:19:09.312300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:03.122 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 BaseBdev2 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
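The trace above runs `waitforbdev BaseBdev2`: with no explicit timeout it sets `bdev_timeout=2000`, calls `bdev_wait_for_examine`, and then queries `bdev_get_bdevs -b BaseBdev2 -t 2000`, which blocks until the bdev appears or the timeout expires. As a rough illustration only (not SPDK's actual rpc client — the lookup callback and helper name here are hypothetical), the wait-with-timeout pattern looks like this:

```python
import time

def wait_for_bdev(get_bdev, name, timeout_ms=2000, poll_ms=100):
    """Poll get_bdev(name) until it returns a record or timeout_ms elapses.

    get_bdev is a caller-supplied lookup (a stand-in for the
    'bdev_get_bdevs -b NAME' RPC); returns the record, or raises
    TimeoutError if the bdev never shows up.
    """
    deadline = time.monotonic() + timeout_ms / 1000.0
    while True:
        bdev = get_bdev(name)
        if bdev is not None:
            return bdev
        if time.monotonic() >= deadline:
            raise TimeoutError(f"bdev {name} did not appear within {timeout_ms} ms")
        time.sleep(poll_ms / 1000.0)

# Usage: a fake registry that "creates" the bdev on the third poll.
calls = {"n": 0}
def fake_lookup(name):
    calls["n"] += 1
    return {"name": name, "block_size": 512} if calls["n"] >= 3 else None

bdev = wait_for_bdev(fake_lookup, "BaseBdev2")
```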
00:10:03.123 [
00:10:03.123 {
00:10:03.123 "name": "BaseBdev2",
00:10:03.123 "aliases": [
00:10:03.123 "7b2ea7e5-88ba-467c-a1de-97cdb339be53"
00:10:03.123 ],
00:10:03.123 "product_name": "Malloc disk",
00:10:03.123 "block_size": 512,
00:10:03.123 "num_blocks": 65536,
00:10:03.123 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53",
00:10:03.123 "assigned_rate_limits": {
00:10:03.123 "rw_ios_per_sec": 0,
00:10:03.123 "rw_mbytes_per_sec": 0,
00:10:03.123 "r_mbytes_per_sec": 0,
00:10:03.123 "w_mbytes_per_sec": 0
00:10:03.123 },
00:10:03.123 "claimed": false,
00:10:03.123 "zoned": false,
00:10:03.123 "supported_io_types": {
00:10:03.123 "read": true,
00:10:03.123 "write": true,
00:10:03.123 "unmap": true,
00:10:03.123 "flush": true,
00:10:03.123 "reset": true,
00:10:03.123 "nvme_admin": false,
00:10:03.123 "nvme_io": false,
00:10:03.123 "nvme_io_md": false,
00:10:03.123 "write_zeroes": true,
00:10:03.123 "zcopy": true,
00:10:03.123 "get_zone_info": false,
00:10:03.123 "zone_management": false,
00:10:03.123 "zone_append": false,
00:10:03.123 "compare": false,
00:10:03.123 "compare_and_write": false,
00:10:03.123 "abort": true,
00:10:03.123 "seek_hole": false,
00:10:03.123 "seek_data": false,
00:10:03.123 "copy": true,
00:10:03.123 "nvme_iov_md": false
00:10:03.123 },
00:10:03.123 "memory_domains": [
00:10:03.123 {
00:10:03.123 "dma_device_id": "system",
00:10:03.123 "dma_device_type": 1
00:10:03.123 },
00:10:03.123 {
00:10:03.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:03.123 "dma_device_type": 2
00:10:03.123 }
00:10:03.123 ],
00:10:03.123 "driver_specific": {}
00:10:03.123 }
00:10:03.123 ]
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 BaseBdev3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:03.123 [ 00:10:03.123 { 00:10:03.123 "name": "BaseBdev3", 00:10:03.123 "aliases": [ 00:10:03.123 "850dd896-bde6-4904-b174-d499c717fa53" 00:10:03.123 ], 00:10:03.123 "product_name": "Malloc disk", 00:10:03.123 "block_size": 512, 00:10:03.123 "num_blocks": 65536, 00:10:03.123 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:03.123 "assigned_rate_limits": { 00:10:03.123 "rw_ios_per_sec": 0, 00:10:03.123 "rw_mbytes_per_sec": 0, 00:10:03.123 "r_mbytes_per_sec": 0, 00:10:03.123 "w_mbytes_per_sec": 0 00:10:03.123 }, 00:10:03.123 "claimed": false, 00:10:03.123 "zoned": false, 00:10:03.123 "supported_io_types": { 00:10:03.123 "read": true, 00:10:03.123 "write": true, 00:10:03.123 "unmap": true, 00:10:03.123 "flush": true, 00:10:03.123 "reset": true, 00:10:03.123 "nvme_admin": false, 00:10:03.123 "nvme_io": false, 00:10:03.123 "nvme_io_md": false, 00:10:03.123 "write_zeroes": true, 00:10:03.123 "zcopy": true, 00:10:03.123 "get_zone_info": false, 00:10:03.123 "zone_management": false, 00:10:03.123 "zone_append": false, 00:10:03.123 "compare": false, 00:10:03.123 "compare_and_write": false, 00:10:03.123 "abort": true, 00:10:03.123 "seek_hole": false, 00:10:03.123 "seek_data": false, 00:10:03.123 "copy": true, 00:10:03.123 "nvme_iov_md": false 00:10:03.123 }, 00:10:03.123 "memory_domains": [ 00:10:03.123 { 00:10:03.123 "dma_device_id": "system", 00:10:03.123 "dma_device_type": 1 00:10:03.123 }, 00:10:03.123 { 00:10:03.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.123 "dma_device_type": 2 00:10:03.123 } 00:10:03.123 ], 00:10:03.123 "driver_specific": {} 00:10:03.123 } 00:10:03.123 ] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.123 [2024-11-10 15:19:09.473578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.123 [2024-11-10 15:19:09.473626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.123 [2024-11-10 15:19:09.473664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.123 [2024-11-10 15:19:09.475723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:03.123 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:03.383 "name": "Existed_Raid",
00:10:03.383 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:03.383 "strip_size_kb": 0,
00:10:03.383 "state": "configuring",
00:10:03.383 "raid_level": "raid1",
00:10:03.383 "superblock": false,
00:10:03.383 "num_base_bdevs": 3,
00:10:03.383 "num_base_bdevs_discovered": 2,
00:10:03.383 "num_base_bdevs_operational": 3,
00:10:03.383 "base_bdevs_list": [
00:10:03.383 {
00:10:03.383 "name": "BaseBdev1",
00:10:03.383 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:03.383 "is_configured": false,
00:10:03.383 "data_offset": 0,
00:10:03.383 "data_size": 0
00:10:03.383 },
00:10:03.383 {
00:10:03.383 "name": "BaseBdev2",
00:10:03.383 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53",
00:10:03.383 "is_configured": true,
00:10:03.383 "data_offset": 0,
00:10:03.383 "data_size": 65536
00:10:03.383 },
00:10:03.383 {
00:10:03.383 "name": "BaseBdev3",
00:10:03.383 "uuid": "850dd896-bde6-4904-b174-d499c717fa53",
00:10:03.383 "is_configured": true,
00:10:03.383 "data_offset": 0,
00:10:03.383 "data_size": 65536
00:10:03.383 }
00:10:03.383 ]
00:10:03.383 }' 00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.383 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.642 [2024-11-10 15:19:09.869686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.642 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.643 "name": "Existed_Raid", 00:10:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.643 "strip_size_kb": 0, 00:10:03.643 "state": "configuring", 00:10:03.643 "raid_level": "raid1", 00:10:03.643 "superblock": false, 00:10:03.643 "num_base_bdevs": 3, 00:10:03.643 "num_base_bdevs_discovered": 1, 00:10:03.643 "num_base_bdevs_operational": 3, 00:10:03.643 "base_bdevs_list": [ 00:10:03.643 { 00:10:03.643 "name": "BaseBdev1", 00:10:03.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.643 "is_configured": false, 00:10:03.643 "data_offset": 0, 00:10:03.643 "data_size": 0 00:10:03.643 }, 00:10:03.643 { 00:10:03.643 "name": null, 00:10:03.643 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:03.643 "is_configured": false, 00:10:03.643 "data_offset": 0, 00:10:03.643 "data_size": 65536 00:10:03.643 }, 00:10:03.643 { 00:10:03.643 "name": "BaseBdev3", 00:10:03.643 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:03.643 "is_configured": true, 00:10:03.643 "data_offset": 0, 00:10:03.643 "data_size": 65536 00:10:03.643 } 00:10:03.643 ] 00:10:03.643 }' 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.643 15:19:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # 
jq '.[0].base_bdevs_list[1].is_configured' 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.236 [2024-11-10 15:19:10.340640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:04.236 BaseBdev1 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 
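The `jq '.[0].base_bdevs_list[1].is_configured'` filters in this trace read the JSON that `rpc_cmd bdev_raid_get_bdevs all` prints. As a sketch of what those checks verify, here is the same selection done in Python against a trimmed copy of the `raid_bdev_info` dump shown earlier in the log (after `BaseBdev2` was removed: `num_base_bdevs_discovered` is 1 and slot 1's name is null); only the fields the filters actually touch are kept:

```python
import json

# Trimmed from the raid_bdev_info dump above; fields not read by the
# jq filters are omitted.
raid_bdevs = json.loads("""
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": false},
      {"name": null,        "is_configured": false},
      {"name": "BaseBdev3", "is_configured": true}
    ]
  }
]
""")

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'
slot1_configured = raid_bdevs[0]["base_bdevs_list"][1]["is_configured"]

# Consistency check implied by the test: the discovered count should
# equal the number of configured base-bdev slots.
configured = sum(b["is_configured"] for b in raid_bdevs[0]["base_bdevs_list"])
```

With this record, `slot1_configured` is `False`, matching the `[[ false == \f\a\l\s\e ]]` comparison in the trace.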
00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:04.236 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.237 [ 00:10:04.237 { 00:10:04.237 "name": "BaseBdev1", 00:10:04.237 "aliases": [ 00:10:04.237 "612f9607-54bc-4893-84d0-f72aa3192371" 00:10:04.237 ], 00:10:04.237 "product_name": "Malloc disk", 00:10:04.237 "block_size": 512, 00:10:04.237 "num_blocks": 65536, 00:10:04.237 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:04.237 "assigned_rate_limits": { 00:10:04.237 "rw_ios_per_sec": 0, 00:10:04.237 "rw_mbytes_per_sec": 0, 00:10:04.237 "r_mbytes_per_sec": 0, 00:10:04.237 "w_mbytes_per_sec": 0 00:10:04.237 }, 00:10:04.237 "claimed": true, 00:10:04.237 "claim_type": "exclusive_write", 00:10:04.237 "zoned": false, 00:10:04.237 "supported_io_types": { 00:10:04.237 "read": true, 00:10:04.237 "write": true, 00:10:04.237 "unmap": true, 00:10:04.237 "flush": true, 00:10:04.237 "reset": true, 00:10:04.237 "nvme_admin": false, 00:10:04.237 "nvme_io": false, 00:10:04.237 "nvme_io_md": false, 00:10:04.237 "write_zeroes": true, 00:10:04.237 "zcopy": true, 00:10:04.237 "get_zone_info": false, 00:10:04.237 "zone_management": false, 00:10:04.237 "zone_append": false, 00:10:04.237 "compare": false, 00:10:04.237 "compare_and_write": false, 00:10:04.237 "abort": true, 00:10:04.237 "seek_hole": false, 00:10:04.237 "seek_data": false, 00:10:04.237 "copy": true, 00:10:04.237 "nvme_iov_md": false 00:10:04.237 }, 
00:10:04.237 "memory_domains": [ 00:10:04.237 { 00:10:04.237 "dma_device_id": "system", 00:10:04.237 "dma_device_type": 1 00:10:04.237 }, 00:10:04.237 { 00:10:04.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.237 "dma_device_type": 2 00:10:04.237 } 00:10:04.237 ], 00:10:04.237 "driver_specific": {} 00:10:04.237 } 00:10:04.237 ] 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.237 "name": "Existed_Raid", 00:10:04.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.237 "strip_size_kb": 0, 00:10:04.237 "state": "configuring", 00:10:04.237 "raid_level": "raid1", 00:10:04.237 "superblock": false, 00:10:04.237 "num_base_bdevs": 3, 00:10:04.237 "num_base_bdevs_discovered": 2, 00:10:04.237 "num_base_bdevs_operational": 3, 00:10:04.237 "base_bdevs_list": [ 00:10:04.237 { 00:10:04.237 "name": "BaseBdev1", 00:10:04.237 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:04.237 "is_configured": true, 00:10:04.237 "data_offset": 0, 00:10:04.237 "data_size": 65536 00:10:04.237 }, 00:10:04.237 { 00:10:04.237 "name": null, 00:10:04.237 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:04.237 "is_configured": false, 00:10:04.237 "data_offset": 0, 00:10:04.237 "data_size": 65536 00:10:04.237 }, 00:10:04.237 { 00:10:04.237 "name": "BaseBdev3", 00:10:04.237 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:04.237 "is_configured": true, 00:10:04.237 "data_offset": 0, 00:10:04.237 "data_size": 65536 00:10:04.237 } 00:10:04.237 ] 00:10:04.237 }' 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.237 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.497 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.497 [2024-11-10 15:19:10.856855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.755 
15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.755 "name": "Existed_Raid", 00:10:04.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.755 "strip_size_kb": 0, 00:10:04.755 "state": "configuring", 00:10:04.755 "raid_level": "raid1", 00:10:04.755 "superblock": false, 00:10:04.755 "num_base_bdevs": 3, 00:10:04.755 "num_base_bdevs_discovered": 1, 00:10:04.755 "num_base_bdevs_operational": 3, 00:10:04.755 "base_bdevs_list": [ 00:10:04.755 { 00:10:04.755 "name": "BaseBdev1", 00:10:04.755 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:04.755 "is_configured": true, 00:10:04.755 "data_offset": 0, 00:10:04.755 "data_size": 65536 00:10:04.755 }, 00:10:04.755 { 00:10:04.755 "name": null, 00:10:04.755 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:04.755 "is_configured": false, 00:10:04.755 "data_offset": 0, 00:10:04.755 "data_size": 65536 00:10:04.755 }, 00:10:04.755 { 00:10:04.755 "name": null, 00:10:04.755 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:04.755 "is_configured": false, 00:10:04.755 "data_offset": 0, 00:10:04.755 "data_size": 65536 00:10:04.755 } 00:10:04.755 ] 00:10:04.755 }' 00:10:04.755 15:19:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.755 15:19:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.014 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.014 [2024-11-10 15:19:11.373037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.274 15:19:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.274 "name": "Existed_Raid", 00:10:05.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.274 "strip_size_kb": 0, 00:10:05.274 "state": "configuring", 00:10:05.274 "raid_level": "raid1", 00:10:05.274 "superblock": false, 00:10:05.274 "num_base_bdevs": 3, 00:10:05.274 "num_base_bdevs_discovered": 2, 00:10:05.274 "num_base_bdevs_operational": 3, 00:10:05.274 "base_bdevs_list": [ 00:10:05.274 { 00:10:05.274 "name": "BaseBdev1", 00:10:05.274 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:05.274 "is_configured": true, 00:10:05.274 "data_offset": 0, 00:10:05.274 "data_size": 65536 00:10:05.274 }, 00:10:05.274 { 00:10:05.274 "name": null, 00:10:05.274 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:05.274 "is_configured": false, 00:10:05.274 "data_offset": 
0, 00:10:05.274 "data_size": 65536 00:10:05.274 }, 00:10:05.274 { 00:10:05.274 "name": "BaseBdev3", 00:10:05.274 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:05.274 "is_configured": true, 00:10:05.274 "data_offset": 0, 00:10:05.274 "data_size": 65536 00:10:05.274 } 00:10:05.274 ] 00:10:05.274 }' 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.274 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.533 [2024-11-10 15:19:11.849183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:05.533 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.534 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.792 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.792 "name": "Existed_Raid", 00:10:05.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.792 "strip_size_kb": 0, 00:10:05.792 "state": "configuring", 00:10:05.792 "raid_level": "raid1", 00:10:05.792 "superblock": false, 00:10:05.792 "num_base_bdevs": 3, 00:10:05.792 "num_base_bdevs_discovered": 1, 00:10:05.792 "num_base_bdevs_operational": 3, 00:10:05.792 "base_bdevs_list": [ 
00:10:05.792 { 00:10:05.792 "name": null, 00:10:05.792 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:05.792 "is_configured": false, 00:10:05.792 "data_offset": 0, 00:10:05.792 "data_size": 65536 00:10:05.792 }, 00:10:05.792 { 00:10:05.792 "name": null, 00:10:05.792 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:05.792 "is_configured": false, 00:10:05.792 "data_offset": 0, 00:10:05.792 "data_size": 65536 00:10:05.792 }, 00:10:05.792 { 00:10:05.792 "name": "BaseBdev3", 00:10:05.792 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:05.792 "is_configured": true, 00:10:05.792 "data_offset": 0, 00:10:05.792 "data_size": 65536 00:10:05.792 } 00:10:05.792 ] 00:10:05.792 }' 00:10:05.792 15:19:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.792 15:19:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 [2024-11-10 15:19:12.352002] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:06.051 "name": "Existed_Raid", 00:10:06.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.051 "strip_size_kb": 0, 00:10:06.051 "state": "configuring", 00:10:06.051 "raid_level": "raid1", 00:10:06.051 "superblock": false, 00:10:06.051 "num_base_bdevs": 3, 00:10:06.051 "num_base_bdevs_discovered": 2, 00:10:06.051 "num_base_bdevs_operational": 3, 00:10:06.051 "base_bdevs_list": [ 00:10:06.051 { 00:10:06.051 "name": null, 00:10:06.051 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:06.051 "is_configured": false, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 65536 00:10:06.051 }, 00:10:06.051 { 00:10:06.051 "name": "BaseBdev2", 00:10:06.051 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:06.051 "is_configured": true, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 65536 00:10:06.051 }, 00:10:06.051 { 00:10:06.051 "name": "BaseBdev3", 00:10:06.051 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:06.051 "is_configured": true, 00:10:06.051 "data_offset": 0, 00:10:06.051 "data_size": 65536 00:10:06.051 } 00:10:06.051 ] 00:10:06.051 }' 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.051 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 
00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 612f9607-54bc-4893-84d0-f72aa3192371 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 [2024-11-10 15:19:12.894971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.620 [2024-11-10 15:19:12.895031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:06.620 [2024-11-10 15:19:12.895044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:06.620 [2024-11-10 15:19:12.895276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:10:06.620 [2024-11-10 15:19:12.895438] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:06.620 [2024-11-10 15:19:12.895452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:06.620 [2024-11-10 15:19:12.895622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.620 NewBaseBdev 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 
15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.620 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.620 [ 00:10:06.621 { 00:10:06.621 "name": "NewBaseBdev", 00:10:06.621 "aliases": [ 00:10:06.621 "612f9607-54bc-4893-84d0-f72aa3192371" 00:10:06.621 ], 00:10:06.621 "product_name": "Malloc disk", 00:10:06.621 "block_size": 512, 00:10:06.621 "num_blocks": 65536, 00:10:06.621 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:06.621 "assigned_rate_limits": { 00:10:06.621 "rw_ios_per_sec": 0, 00:10:06.621 "rw_mbytes_per_sec": 0, 00:10:06.621 "r_mbytes_per_sec": 0, 00:10:06.621 "w_mbytes_per_sec": 0 00:10:06.621 }, 00:10:06.621 
"claimed": true, 00:10:06.621 "claim_type": "exclusive_write", 00:10:06.621 "zoned": false, 00:10:06.621 "supported_io_types": { 00:10:06.621 "read": true, 00:10:06.621 "write": true, 00:10:06.621 "unmap": true, 00:10:06.621 "flush": true, 00:10:06.621 "reset": true, 00:10:06.621 "nvme_admin": false, 00:10:06.621 "nvme_io": false, 00:10:06.621 "nvme_io_md": false, 00:10:06.621 "write_zeroes": true, 00:10:06.621 "zcopy": true, 00:10:06.621 "get_zone_info": false, 00:10:06.621 "zone_management": false, 00:10:06.621 "zone_append": false, 00:10:06.621 "compare": false, 00:10:06.621 "compare_and_write": false, 00:10:06.621 "abort": true, 00:10:06.621 "seek_hole": false, 00:10:06.621 "seek_data": false, 00:10:06.621 "copy": true, 00:10:06.621 "nvme_iov_md": false 00:10:06.621 }, 00:10:06.621 "memory_domains": [ 00:10:06.621 { 00:10:06.621 "dma_device_id": "system", 00:10:06.621 "dma_device_type": 1 00:10:06.621 }, 00:10:06.621 { 00:10:06.621 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.621 "dma_device_type": 2 00:10:06.621 } 00:10:06.621 ], 00:10:06.621 "driver_specific": {} 00:10:06.621 } 00:10:06.621 ] 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.621 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.880 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.880 "name": "Existed_Raid", 00:10:06.880 "uuid": "47171b5b-a01c-4087-a281-f0b5366d9348", 00:10:06.880 "strip_size_kb": 0, 00:10:06.880 "state": "online", 00:10:06.880 "raid_level": "raid1", 00:10:06.880 "superblock": false, 00:10:06.880 "num_base_bdevs": 3, 00:10:06.880 "num_base_bdevs_discovered": 3, 00:10:06.880 "num_base_bdevs_operational": 3, 00:10:06.880 "base_bdevs_list": [ 00:10:06.880 { 00:10:06.880 "name": "NewBaseBdev", 00:10:06.880 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:06.880 "is_configured": true, 00:10:06.880 "data_offset": 0, 00:10:06.880 "data_size": 65536 00:10:06.880 }, 00:10:06.880 { 00:10:06.880 "name": "BaseBdev2", 00:10:06.880 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:06.880 "is_configured": true, 00:10:06.880 "data_offset": 0, 00:10:06.880 "data_size": 65536 
00:10:06.880 }, 00:10:06.880 { 00:10:06.880 "name": "BaseBdev3", 00:10:06.880 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:06.880 "is_configured": true, 00:10:06.880 "data_offset": 0, 00:10:06.880 "data_size": 65536 00:10:06.880 } 00:10:06.880 ] 00:10:06.880 }' 00:10:06.880 15:19:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.880 15:19:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.139 [2024-11-10 15:19:13.419527] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.139 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:07.139 "name": "Existed_Raid", 00:10:07.139 "aliases": [ 
00:10:07.139 "47171b5b-a01c-4087-a281-f0b5366d9348" 00:10:07.139 ], 00:10:07.139 "product_name": "Raid Volume", 00:10:07.139 "block_size": 512, 00:10:07.139 "num_blocks": 65536, 00:10:07.139 "uuid": "47171b5b-a01c-4087-a281-f0b5366d9348", 00:10:07.139 "assigned_rate_limits": { 00:10:07.139 "rw_ios_per_sec": 0, 00:10:07.139 "rw_mbytes_per_sec": 0, 00:10:07.139 "r_mbytes_per_sec": 0, 00:10:07.139 "w_mbytes_per_sec": 0 00:10:07.139 }, 00:10:07.139 "claimed": false, 00:10:07.139 "zoned": false, 00:10:07.139 "supported_io_types": { 00:10:07.139 "read": true, 00:10:07.139 "write": true, 00:10:07.139 "unmap": false, 00:10:07.139 "flush": false, 00:10:07.139 "reset": true, 00:10:07.139 "nvme_admin": false, 00:10:07.139 "nvme_io": false, 00:10:07.139 "nvme_io_md": false, 00:10:07.139 "write_zeroes": true, 00:10:07.140 "zcopy": false, 00:10:07.140 "get_zone_info": false, 00:10:07.140 "zone_management": false, 00:10:07.140 "zone_append": false, 00:10:07.140 "compare": false, 00:10:07.140 "compare_and_write": false, 00:10:07.140 "abort": false, 00:10:07.140 "seek_hole": false, 00:10:07.140 "seek_data": false, 00:10:07.140 "copy": false, 00:10:07.140 "nvme_iov_md": false 00:10:07.140 }, 00:10:07.140 "memory_domains": [ 00:10:07.140 { 00:10:07.140 "dma_device_id": "system", 00:10:07.140 "dma_device_type": 1 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.140 "dma_device_type": 2 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "dma_device_id": "system", 00:10:07.140 "dma_device_type": 1 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.140 "dma_device_type": 2 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "dma_device_id": "system", 00:10:07.140 "dma_device_type": 1 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.140 "dma_device_type": 2 00:10:07.140 } 00:10:07.140 ], 00:10:07.140 "driver_specific": { 00:10:07.140 "raid": { 00:10:07.140 "uuid": 
"47171b5b-a01c-4087-a281-f0b5366d9348", 00:10:07.140 "strip_size_kb": 0, 00:10:07.140 "state": "online", 00:10:07.140 "raid_level": "raid1", 00:10:07.140 "superblock": false, 00:10:07.140 "num_base_bdevs": 3, 00:10:07.140 "num_base_bdevs_discovered": 3, 00:10:07.140 "num_base_bdevs_operational": 3, 00:10:07.140 "base_bdevs_list": [ 00:10:07.140 { 00:10:07.140 "name": "NewBaseBdev", 00:10:07.140 "uuid": "612f9607-54bc-4893-84d0-f72aa3192371", 00:10:07.140 "is_configured": true, 00:10:07.140 "data_offset": 0, 00:10:07.140 "data_size": 65536 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "name": "BaseBdev2", 00:10:07.140 "uuid": "7b2ea7e5-88ba-467c-a1de-97cdb339be53", 00:10:07.140 "is_configured": true, 00:10:07.140 "data_offset": 0, 00:10:07.140 "data_size": 65536 00:10:07.140 }, 00:10:07.140 { 00:10:07.140 "name": "BaseBdev3", 00:10:07.140 "uuid": "850dd896-bde6-4904-b174-d499c717fa53", 00:10:07.140 "is_configured": true, 00:10:07.140 "data_offset": 0, 00:10:07.140 "data_size": 65536 00:10:07.140 } 00:10:07.140 ] 00:10:07.140 } 00:10:07.140 } 00:10:07.140 }' 00:10:07.140 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:07.400 BaseBdev2 00:10:07.400 BaseBdev3' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.400 15:19:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:07.400 
15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.400 [2024-11-10 15:19:13.675309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.400 [2024-11-10 15:19:13.675340] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.400 [2024-11-10 15:19:13.675421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.400 [2024-11-10 15:19:13.675667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.400 [2024-11-10 15:19:13.675685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79841 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 79841 ']' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 79841 00:10:07.400 15:19:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79841 00:10:07.400 killing process with pid 79841 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79841' 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 79841 00:10:07.400 [2024-11-10 15:19:13.720342] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.400 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 79841 00:10:07.400 [2024-11-10 15:19:13.751359] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:07.660 15:19:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:07.660 00:10:07.660 real 0m8.816s 00:10:07.660 user 0m15.077s 00:10:07.660 sys 0m1.801s 00:10:07.660 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:07.660 15:19:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.660 ************************************ 00:10:07.660 END TEST raid_state_function_test 00:10:07.660 ************************************ 00:10:07.919 15:19:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:07.919 15:19:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:07.919 15:19:14 bdev_raid -- common/autotest_common.sh@1109 
-- # xtrace_disable 00:10:07.919 15:19:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:07.919 ************************************ 00:10:07.919 START TEST raid_state_function_test_sb 00:10:07.919 ************************************ 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 3 true 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80447 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80447' 00:10:07.919 Process raid pid: 80447 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80447 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 80447 ']' 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.919 15:19:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:07.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:07.919 15:19:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.919 [2024-11-10 15:19:14.129279] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:07.919 [2024-11-10 15:19:14.129443] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.919 [2024-11-10 15:19:14.261629] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:08.179 [2024-11-10 15:19:14.289418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.179 [2024-11-10 15:19:14.313775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.179 [2024-11-10 15:19:14.356062] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.179 [2024-11-10 15:19:14.356107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.748 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:08.748 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:08.748 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:08.748 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.748 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.748 [2024-11-10 15:19:15.010471] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:08.748 [2024-11-10 15:19:15.010532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:08.748 [2024-11-10 15:19:15.010554] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.748 [2024-11-10 15:19:15.010563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.748 [2024-11-10 15:19:15.010574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.749 [2024-11-10 15:19:15.010583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.749 15:19:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.749 "name": "Existed_Raid", 00:10:08.749 "uuid": "21483f81-e70b-4e17-870b-56f24e37cbc4", 00:10:08.749 "strip_size_kb": 0, 
00:10:08.749 "state": "configuring", 00:10:08.749 "raid_level": "raid1", 00:10:08.749 "superblock": true, 00:10:08.749 "num_base_bdevs": 3, 00:10:08.749 "num_base_bdevs_discovered": 0, 00:10:08.749 "num_base_bdevs_operational": 3, 00:10:08.749 "base_bdevs_list": [ 00:10:08.749 { 00:10:08.749 "name": "BaseBdev1", 00:10:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.749 "is_configured": false, 00:10:08.749 "data_offset": 0, 00:10:08.749 "data_size": 0 00:10:08.749 }, 00:10:08.749 { 00:10:08.749 "name": "BaseBdev2", 00:10:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.749 "is_configured": false, 00:10:08.749 "data_offset": 0, 00:10:08.749 "data_size": 0 00:10:08.749 }, 00:10:08.749 { 00:10:08.749 "name": "BaseBdev3", 00:10:08.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.749 "is_configured": false, 00:10:08.749 "data_offset": 0, 00:10:08.749 "data_size": 0 00:10:08.749 } 00:10:08.749 ] 00:10:08.749 }' 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.749 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 [2024-11-10 15:19:15.450476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.318 [2024-11-10 15:19:15.450520] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 [2024-11-10 15:19:15.458489] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:09.318 [2024-11-10 15:19:15.458528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:09.318 [2024-11-10 15:19:15.458538] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.318 [2024-11-10 15:19:15.458545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.318 [2024-11-10 15:19:15.458569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.318 [2024-11-10 15:19:15.458577] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 [2024-11-10 15:19:15.475302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.318 BaseBdev1 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.318 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.318 [ 00:10:09.318 { 00:10:09.318 "name": "BaseBdev1", 00:10:09.318 "aliases": [ 00:10:09.318 "dd6fb39d-a090-4c98-ab50-f44ea3f86652" 00:10:09.318 ], 00:10:09.318 "product_name": "Malloc disk", 00:10:09.318 "block_size": 512, 00:10:09.318 "num_blocks": 65536, 00:10:09.318 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:09.318 "assigned_rate_limits": { 00:10:09.318 "rw_ios_per_sec": 0, 00:10:09.318 "rw_mbytes_per_sec": 0, 00:10:09.318 "r_mbytes_per_sec": 0, 00:10:09.318 "w_mbytes_per_sec": 0 00:10:09.318 }, 00:10:09.318 "claimed": true, 00:10:09.318 "claim_type": "exclusive_write", 00:10:09.318 "zoned": false, 00:10:09.318 "supported_io_types": { 
00:10:09.318 "read": true, 00:10:09.318 "write": true, 00:10:09.318 "unmap": true, 00:10:09.319 "flush": true, 00:10:09.319 "reset": true, 00:10:09.319 "nvme_admin": false, 00:10:09.319 "nvme_io": false, 00:10:09.319 "nvme_io_md": false, 00:10:09.319 "write_zeroes": true, 00:10:09.319 "zcopy": true, 00:10:09.319 "get_zone_info": false, 00:10:09.319 "zone_management": false, 00:10:09.319 "zone_append": false, 00:10:09.319 "compare": false, 00:10:09.319 "compare_and_write": false, 00:10:09.319 "abort": true, 00:10:09.319 "seek_hole": false, 00:10:09.319 "seek_data": false, 00:10:09.319 "copy": true, 00:10:09.319 "nvme_iov_md": false 00:10:09.319 }, 00:10:09.319 "memory_domains": [ 00:10:09.319 { 00:10:09.319 "dma_device_id": "system", 00:10:09.319 "dma_device_type": 1 00:10:09.319 }, 00:10:09.319 { 00:10:09.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.319 "dma_device_type": 2 00:10:09.319 } 00:10:09.319 ], 00:10:09.319 "driver_specific": {} 00:10:09.319 } 00:10:09.319 ] 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.319 15:19:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.319 "name": "Existed_Raid", 00:10:09.319 "uuid": "f3f91951-4cdb-4305-8822-784282e09ea9", 00:10:09.319 "strip_size_kb": 0, 00:10:09.319 "state": "configuring", 00:10:09.319 "raid_level": "raid1", 00:10:09.319 "superblock": true, 00:10:09.319 "num_base_bdevs": 3, 00:10:09.319 "num_base_bdevs_discovered": 1, 00:10:09.319 "num_base_bdevs_operational": 3, 00:10:09.319 "base_bdevs_list": [ 00:10:09.319 { 00:10:09.319 "name": "BaseBdev1", 00:10:09.319 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:09.319 "is_configured": true, 00:10:09.319 "data_offset": 2048, 00:10:09.319 "data_size": 63488 00:10:09.319 }, 00:10:09.319 { 00:10:09.319 "name": "BaseBdev2", 00:10:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.319 "is_configured": false, 00:10:09.319 "data_offset": 0, 00:10:09.319 "data_size": 0 00:10:09.319 }, 00:10:09.319 { 00:10:09.319 "name": 
"BaseBdev3", 00:10:09.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.319 "is_configured": false, 00:10:09.319 "data_offset": 0, 00:10:09.319 "data_size": 0 00:10:09.319 } 00:10:09.319 ] 00:10:09.319 }' 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.319 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 [2024-11-10 15:19:15.971496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:09.888 [2024-11-10 15:19:15.971559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 [2024-11-10 15:19:15.979526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:09.888 [2024-11-10 15:19:15.981429] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:09.888 [2024-11-10 15:19:15.981468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:09.888 [2024-11-10 15:19:15.981496] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:09.888 [2024-11-10 15:19:15.981503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.888 15:19:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.888 15:19:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.888 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.888 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.888 "name": "Existed_Raid", 00:10:09.888 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:09.888 "strip_size_kb": 0, 00:10:09.888 "state": "configuring", 00:10:09.888 "raid_level": "raid1", 00:10:09.888 "superblock": true, 00:10:09.888 "num_base_bdevs": 3, 00:10:09.888 "num_base_bdevs_discovered": 1, 00:10:09.888 "num_base_bdevs_operational": 3, 00:10:09.888 "base_bdevs_list": [ 00:10:09.888 { 00:10:09.888 "name": "BaseBdev1", 00:10:09.888 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:09.888 "is_configured": true, 00:10:09.888 "data_offset": 2048, 00:10:09.888 "data_size": 63488 00:10:09.888 }, 00:10:09.888 { 00:10:09.888 "name": "BaseBdev2", 00:10:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.888 "is_configured": false, 00:10:09.888 "data_offset": 0, 00:10:09.888 "data_size": 0 00:10:09.888 }, 00:10:09.888 { 00:10:09.888 "name": "BaseBdev3", 00:10:09.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.888 "is_configured": false, 00:10:09.888 "data_offset": 0, 00:10:09.888 "data_size": 0 00:10:09.888 } 00:10:09.888 ] 00:10:09.888 }' 00:10:09.888 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.888 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.148 [2024-11-10 15:19:16.398567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.148 BaseBdev2 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.148 [ 00:10:10.148 { 00:10:10.148 "name": "BaseBdev2", 00:10:10.148 "aliases": [ 00:10:10.148 
"c4facd5f-8c39-49f8-ad24-f0dd470bcada" 00:10:10.148 ], 00:10:10.148 "product_name": "Malloc disk", 00:10:10.148 "block_size": 512, 00:10:10.148 "num_blocks": 65536, 00:10:10.148 "uuid": "c4facd5f-8c39-49f8-ad24-f0dd470bcada", 00:10:10.148 "assigned_rate_limits": { 00:10:10.148 "rw_ios_per_sec": 0, 00:10:10.148 "rw_mbytes_per_sec": 0, 00:10:10.148 "r_mbytes_per_sec": 0, 00:10:10.148 "w_mbytes_per_sec": 0 00:10:10.148 }, 00:10:10.148 "claimed": true, 00:10:10.148 "claim_type": "exclusive_write", 00:10:10.148 "zoned": false, 00:10:10.148 "supported_io_types": { 00:10:10.148 "read": true, 00:10:10.148 "write": true, 00:10:10.148 "unmap": true, 00:10:10.148 "flush": true, 00:10:10.148 "reset": true, 00:10:10.148 "nvme_admin": false, 00:10:10.148 "nvme_io": false, 00:10:10.148 "nvme_io_md": false, 00:10:10.148 "write_zeroes": true, 00:10:10.148 "zcopy": true, 00:10:10.148 "get_zone_info": false, 00:10:10.148 "zone_management": false, 00:10:10.148 "zone_append": false, 00:10:10.148 "compare": false, 00:10:10.148 "compare_and_write": false, 00:10:10.148 "abort": true, 00:10:10.148 "seek_hole": false, 00:10:10.148 "seek_data": false, 00:10:10.148 "copy": true, 00:10:10.148 "nvme_iov_md": false 00:10:10.148 }, 00:10:10.148 "memory_domains": [ 00:10:10.148 { 00:10:10.148 "dma_device_id": "system", 00:10:10.148 "dma_device_type": 1 00:10:10.148 }, 00:10:10.148 { 00:10:10.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.148 "dma_device_type": 2 00:10:10.148 } 00:10:10.148 ], 00:10:10.148 "driver_specific": {} 00:10:10.148 } 00:10:10.148 ] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.148 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.148 "name": "Existed_Raid", 00:10:10.148 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:10.148 
"strip_size_kb": 0, 00:10:10.148 "state": "configuring", 00:10:10.148 "raid_level": "raid1", 00:10:10.148 "superblock": true, 00:10:10.148 "num_base_bdevs": 3, 00:10:10.148 "num_base_bdevs_discovered": 2, 00:10:10.148 "num_base_bdevs_operational": 3, 00:10:10.148 "base_bdevs_list": [ 00:10:10.148 { 00:10:10.148 "name": "BaseBdev1", 00:10:10.148 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:10.148 "is_configured": true, 00:10:10.148 "data_offset": 2048, 00:10:10.148 "data_size": 63488 00:10:10.148 }, 00:10:10.148 { 00:10:10.149 "name": "BaseBdev2", 00:10:10.149 "uuid": "c4facd5f-8c39-49f8-ad24-f0dd470bcada", 00:10:10.149 "is_configured": true, 00:10:10.149 "data_offset": 2048, 00:10:10.149 "data_size": 63488 00:10:10.149 }, 00:10:10.149 { 00:10:10.149 "name": "BaseBdev3", 00:10:10.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.149 "is_configured": false, 00:10:10.149 "data_offset": 0, 00:10:10.149 "data_size": 0 00:10:10.149 } 00:10:10.149 ] 00:10:10.149 }' 00:10:10.149 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.149 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 [2024-11-10 15:19:16.884517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.717 [2024-11-10 15:19:16.884749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:10.717 [2024-11-10 15:19:16.884774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:10.717 [2024-11-10 15:19:16.885137] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:10.717 [2024-11-10 15:19:16.885316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:10.717 [2024-11-10 15:19:16.885340] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:10.717 BaseBdev3 00:10:10.717 [2024-11-10 15:19:16.885480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 [ 00:10:10.717 { 00:10:10.717 "name": "BaseBdev3", 00:10:10.717 "aliases": [ 00:10:10.717 "bd551fa4-b1b3-4ac9-a954-2658f439026a" 00:10:10.717 ], 00:10:10.717 "product_name": "Malloc disk", 00:10:10.717 "block_size": 512, 00:10:10.717 "num_blocks": 65536, 00:10:10.717 "uuid": "bd551fa4-b1b3-4ac9-a954-2658f439026a", 00:10:10.717 "assigned_rate_limits": { 00:10:10.717 "rw_ios_per_sec": 0, 00:10:10.717 "rw_mbytes_per_sec": 0, 00:10:10.717 "r_mbytes_per_sec": 0, 00:10:10.717 "w_mbytes_per_sec": 0 00:10:10.717 }, 00:10:10.717 "claimed": true, 00:10:10.717 "claim_type": "exclusive_write", 00:10:10.717 "zoned": false, 00:10:10.717 "supported_io_types": { 00:10:10.717 "read": true, 00:10:10.717 "write": true, 00:10:10.717 "unmap": true, 00:10:10.717 "flush": true, 00:10:10.717 "reset": true, 00:10:10.717 "nvme_admin": false, 00:10:10.717 "nvme_io": false, 00:10:10.717 "nvme_io_md": false, 00:10:10.717 "write_zeroes": true, 00:10:10.717 "zcopy": true, 00:10:10.717 "get_zone_info": false, 00:10:10.717 "zone_management": false, 00:10:10.717 "zone_append": false, 00:10:10.717 "compare": false, 00:10:10.717 "compare_and_write": false, 00:10:10.717 "abort": true, 00:10:10.717 "seek_hole": false, 00:10:10.717 "seek_data": false, 00:10:10.717 "copy": true, 00:10:10.717 "nvme_iov_md": false 00:10:10.717 }, 00:10:10.717 "memory_domains": [ 00:10:10.717 { 00:10:10.717 "dma_device_id": "system", 00:10:10.717 "dma_device_type": 1 00:10:10.717 }, 00:10:10.717 { 00:10:10.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.717 "dma_device_type": 2 00:10:10.717 } 00:10:10.717 ], 00:10:10.717 "driver_specific": {} 00:10:10.717 } 00:10:10.717 ] 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 
00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.717 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.717 15:19:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.717 "name": "Existed_Raid", 00:10:10.717 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:10.717 "strip_size_kb": 0, 00:10:10.717 "state": "online", 00:10:10.717 "raid_level": "raid1", 00:10:10.717 "superblock": true, 00:10:10.717 "num_base_bdevs": 3, 00:10:10.717 "num_base_bdevs_discovered": 3, 00:10:10.717 "num_base_bdevs_operational": 3, 00:10:10.717 "base_bdevs_list": [ 00:10:10.717 { 00:10:10.717 "name": "BaseBdev1", 00:10:10.718 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 }, 00:10:10.718 { 00:10:10.718 "name": "BaseBdev2", 00:10:10.718 "uuid": "c4facd5f-8c39-49f8-ad24-f0dd470bcada", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 }, 00:10:10.718 { 00:10:10.718 "name": "BaseBdev3", 00:10:10.718 "uuid": "bd551fa4-b1b3-4ac9-a954-2658f439026a", 00:10:10.718 "is_configured": true, 00:10:10.718 "data_offset": 2048, 00:10:10.718 "data_size": 63488 00:10:10.718 } 00:10:10.718 ] 00:10:10.718 }' 00:10:10.718 15:19:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.718 15:19:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.286 
15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.286 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 [2024-11-10 15:19:17.389000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.287 "name": "Existed_Raid", 00:10:11.287 "aliases": [ 00:10:11.287 "992acda6-a38c-40fa-a846-8c3a83bdc6da" 00:10:11.287 ], 00:10:11.287 "product_name": "Raid Volume", 00:10:11.287 "block_size": 512, 00:10:11.287 "num_blocks": 63488, 00:10:11.287 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:11.287 "assigned_rate_limits": { 00:10:11.287 "rw_ios_per_sec": 0, 00:10:11.287 "rw_mbytes_per_sec": 0, 00:10:11.287 "r_mbytes_per_sec": 0, 00:10:11.287 "w_mbytes_per_sec": 0 00:10:11.287 }, 00:10:11.287 "claimed": false, 00:10:11.287 "zoned": false, 00:10:11.287 "supported_io_types": { 00:10:11.287 "read": true, 00:10:11.287 "write": true, 00:10:11.287 "unmap": false, 00:10:11.287 "flush": false, 00:10:11.287 "reset": true, 00:10:11.287 "nvme_admin": false, 00:10:11.287 "nvme_io": false, 00:10:11.287 "nvme_io_md": false, 00:10:11.287 "write_zeroes": true, 00:10:11.287 "zcopy": false, 00:10:11.287 "get_zone_info": false, 00:10:11.287 "zone_management": false, 00:10:11.287 "zone_append": false, 00:10:11.287 "compare": false, 00:10:11.287 "compare_and_write": false, 00:10:11.287 
"abort": false, 00:10:11.287 "seek_hole": false, 00:10:11.287 "seek_data": false, 00:10:11.287 "copy": false, 00:10:11.287 "nvme_iov_md": false 00:10:11.287 }, 00:10:11.287 "memory_domains": [ 00:10:11.287 { 00:10:11.287 "dma_device_id": "system", 00:10:11.287 "dma_device_type": 1 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.287 "dma_device_type": 2 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "dma_device_id": "system", 00:10:11.287 "dma_device_type": 1 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.287 "dma_device_type": 2 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "dma_device_id": "system", 00:10:11.287 "dma_device_type": 1 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.287 "dma_device_type": 2 00:10:11.287 } 00:10:11.287 ], 00:10:11.287 "driver_specific": { 00:10:11.287 "raid": { 00:10:11.287 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:11.287 "strip_size_kb": 0, 00:10:11.287 "state": "online", 00:10:11.287 "raid_level": "raid1", 00:10:11.287 "superblock": true, 00:10:11.287 "num_base_bdevs": 3, 00:10:11.287 "num_base_bdevs_discovered": 3, 00:10:11.287 "num_base_bdevs_operational": 3, 00:10:11.287 "base_bdevs_list": [ 00:10:11.287 { 00:10:11.287 "name": "BaseBdev1", 00:10:11.287 "uuid": "dd6fb39d-a090-4c98-ab50-f44ea3f86652", 00:10:11.287 "is_configured": true, 00:10:11.287 "data_offset": 2048, 00:10:11.287 "data_size": 63488 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "name": "BaseBdev2", 00:10:11.287 "uuid": "c4facd5f-8c39-49f8-ad24-f0dd470bcada", 00:10:11.287 "is_configured": true, 00:10:11.287 "data_offset": 2048, 00:10:11.287 "data_size": 63488 00:10:11.287 }, 00:10:11.287 { 00:10:11.287 "name": "BaseBdev3", 00:10:11.287 "uuid": "bd551fa4-b1b3-4ac9-a954-2658f439026a", 00:10:11.287 "is_configured": true, 00:10:11.287 "data_offset": 2048, 00:10:11.287 "data_size": 63488 00:10:11.287 } 00:10:11.287 ] 
00:10:11.287 } 00:10:11.287 } 00:10:11.287 }' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:11.287 BaseBdev2 00:10:11.287 BaseBdev3' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.287 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.287 [2024-11-10 15:19:17.636805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.547 15:19:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.547 "name": "Existed_Raid", 00:10:11.547 "uuid": "992acda6-a38c-40fa-a846-8c3a83bdc6da", 00:10:11.547 "strip_size_kb": 0, 00:10:11.547 "state": "online", 00:10:11.547 "raid_level": "raid1", 00:10:11.547 "superblock": true, 00:10:11.547 "num_base_bdevs": 3, 00:10:11.547 "num_base_bdevs_discovered": 2, 00:10:11.547 "num_base_bdevs_operational": 2, 00:10:11.547 "base_bdevs_list": [ 00:10:11.547 { 00:10:11.547 "name": null, 00:10:11.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.547 "is_configured": false, 00:10:11.547 "data_offset": 0, 00:10:11.547 "data_size": 63488 00:10:11.547 }, 00:10:11.547 { 00:10:11.547 "name": "BaseBdev2", 00:10:11.547 "uuid": "c4facd5f-8c39-49f8-ad24-f0dd470bcada", 00:10:11.547 "is_configured": true, 00:10:11.547 "data_offset": 2048, 00:10:11.547 "data_size": 63488 00:10:11.547 }, 00:10:11.547 { 00:10:11.547 "name": "BaseBdev3", 00:10:11.547 "uuid": "bd551fa4-b1b3-4ac9-a954-2658f439026a", 00:10:11.547 "is_configured": true, 00:10:11.547 "data_offset": 2048, 00:10:11.547 "data_size": 63488 00:10:11.547 } 00:10:11.547 ] 00:10:11.547 }' 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.547 15:19:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.859 15:19:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.859 [2024-11-10 15:19:18.132538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.859 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.132 [2024-11-10 15:19:18.203879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.132 [2024-11-10 15:19:18.203994] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.132 [2024-11-10 15:19:18.215681] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.132 [2024-11-10 15:19:18.215741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.132 [2024-11-10 15:19:18.215762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:12.132 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 BaseBdev2 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.133 15:19:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 [ 00:10:12.133 { 00:10:12.133 "name": "BaseBdev2", 00:10:12.133 "aliases": [ 00:10:12.133 "b0b4a189-d389-4ed5-90ba-e8edaee93c3c" 00:10:12.133 ], 00:10:12.133 "product_name": "Malloc disk", 00:10:12.133 "block_size": 512, 00:10:12.133 "num_blocks": 65536, 00:10:12.133 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:12.133 "assigned_rate_limits": { 00:10:12.133 "rw_ios_per_sec": 0, 00:10:12.133 "rw_mbytes_per_sec": 0, 00:10:12.133 "r_mbytes_per_sec": 0, 00:10:12.133 "w_mbytes_per_sec": 0 00:10:12.133 }, 00:10:12.133 "claimed": false, 00:10:12.133 "zoned": false, 00:10:12.133 "supported_io_types": { 00:10:12.133 "read": true, 00:10:12.133 "write": true, 00:10:12.133 "unmap": true, 00:10:12.133 "flush": true, 00:10:12.133 "reset": true, 00:10:12.133 "nvme_admin": false, 00:10:12.133 "nvme_io": false, 00:10:12.133 "nvme_io_md": false, 00:10:12.133 "write_zeroes": true, 00:10:12.133 "zcopy": true, 00:10:12.133 "get_zone_info": false, 00:10:12.133 "zone_management": false, 00:10:12.133 "zone_append": false, 00:10:12.133 "compare": false, 00:10:12.133 
"compare_and_write": false, 00:10:12.133 "abort": true, 00:10:12.133 "seek_hole": false, 00:10:12.133 "seek_data": false, 00:10:12.133 "copy": true, 00:10:12.133 "nvme_iov_md": false 00:10:12.133 }, 00:10:12.133 "memory_domains": [ 00:10:12.133 { 00:10:12.133 "dma_device_id": "system", 00:10:12.133 "dma_device_type": 1 00:10:12.133 }, 00:10:12.133 { 00:10:12.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.133 "dma_device_type": 2 00:10:12.133 } 00:10:12.133 ], 00:10:12.133 "driver_specific": {} 00:10:12.133 } 00:10:12.133 ] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 BaseBdev3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 [ 00:10:12.133 { 00:10:12.133 "name": "BaseBdev3", 00:10:12.133 "aliases": [ 00:10:12.133 "4e7843e9-6641-4753-b15a-39684cecf063" 00:10:12.133 ], 00:10:12.133 "product_name": "Malloc disk", 00:10:12.133 "block_size": 512, 00:10:12.133 "num_blocks": 65536, 00:10:12.133 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:12.133 "assigned_rate_limits": { 00:10:12.133 "rw_ios_per_sec": 0, 00:10:12.133 "rw_mbytes_per_sec": 0, 00:10:12.133 "r_mbytes_per_sec": 0, 00:10:12.133 "w_mbytes_per_sec": 0 00:10:12.133 }, 00:10:12.133 "claimed": false, 00:10:12.133 "zoned": false, 00:10:12.133 "supported_io_types": { 00:10:12.133 "read": true, 00:10:12.133 "write": true, 00:10:12.133 "unmap": true, 00:10:12.133 "flush": true, 00:10:12.133 "reset": true, 00:10:12.133 "nvme_admin": false, 00:10:12.133 "nvme_io": false, 00:10:12.133 "nvme_io_md": false, 00:10:12.133 "write_zeroes": true, 00:10:12.133 "zcopy": true, 00:10:12.133 "get_zone_info": false, 00:10:12.133 "zone_management": false, 00:10:12.133 
"zone_append": false, 00:10:12.133 "compare": false, 00:10:12.133 "compare_and_write": false, 00:10:12.133 "abort": true, 00:10:12.133 "seek_hole": false, 00:10:12.133 "seek_data": false, 00:10:12.133 "copy": true, 00:10:12.133 "nvme_iov_md": false 00:10:12.133 }, 00:10:12.133 "memory_domains": [ 00:10:12.133 { 00:10:12.133 "dma_device_id": "system", 00:10:12.133 "dma_device_type": 1 00:10:12.133 }, 00:10:12.133 { 00:10:12.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.133 "dma_device_type": 2 00:10:12.133 } 00:10:12.133 ], 00:10:12.133 "driver_specific": {} 00:10:12.133 } 00:10:12.133 ] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 [2024-11-10 15:19:18.358899] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:12.133 [2024-11-10 15:19:18.358991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:12.133 [2024-11-10 15:19:18.359072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:12.133 [2024-11-10 15:19:18.360998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.133 15:19:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.133 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.133 "name": 
"Existed_Raid", 00:10:12.133 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:12.133 "strip_size_kb": 0, 00:10:12.134 "state": "configuring", 00:10:12.134 "raid_level": "raid1", 00:10:12.134 "superblock": true, 00:10:12.134 "num_base_bdevs": 3, 00:10:12.134 "num_base_bdevs_discovered": 2, 00:10:12.134 "num_base_bdevs_operational": 3, 00:10:12.134 "base_bdevs_list": [ 00:10:12.134 { 00:10:12.134 "name": "BaseBdev1", 00:10:12.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.134 "is_configured": false, 00:10:12.134 "data_offset": 0, 00:10:12.134 "data_size": 0 00:10:12.134 }, 00:10:12.134 { 00:10:12.134 "name": "BaseBdev2", 00:10:12.134 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:12.134 "is_configured": true, 00:10:12.134 "data_offset": 2048, 00:10:12.134 "data_size": 63488 00:10:12.134 }, 00:10:12.134 { 00:10:12.134 "name": "BaseBdev3", 00:10:12.134 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:12.134 "is_configured": true, 00:10:12.134 "data_offset": 2048, 00:10:12.134 "data_size": 63488 00:10:12.134 } 00:10:12.134 ] 00:10:12.134 }' 00:10:12.134 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.134 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 [2024-11-10 15:19:18.815033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 3 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.703 "name": "Existed_Raid", 00:10:12.703 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:12.703 "strip_size_kb": 0, 00:10:12.703 "state": "configuring", 00:10:12.703 "raid_level": "raid1", 00:10:12.703 "superblock": true, 00:10:12.703 
"num_base_bdevs": 3, 00:10:12.703 "num_base_bdevs_discovered": 1, 00:10:12.703 "num_base_bdevs_operational": 3, 00:10:12.703 "base_bdevs_list": [ 00:10:12.703 { 00:10:12.703 "name": "BaseBdev1", 00:10:12.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.703 "is_configured": false, 00:10:12.703 "data_offset": 0, 00:10:12.703 "data_size": 0 00:10:12.703 }, 00:10:12.703 { 00:10:12.703 "name": null, 00:10:12.703 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:12.703 "is_configured": false, 00:10:12.703 "data_offset": 0, 00:10:12.703 "data_size": 63488 00:10:12.703 }, 00:10:12.703 { 00:10:12.703 "name": "BaseBdev3", 00:10:12.703 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:12.703 "is_configured": true, 00:10:12.703 "data_offset": 2048, 00:10:12.703 "data_size": 63488 00:10:12.703 } 00:10:12.703 ] 00:10:12.703 }' 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.703 15:19:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.962 [2024-11-10 15:19:19.322120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.962 BaseBdev1 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:12.962 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.222 [ 00:10:13.222 { 00:10:13.222 "name": "BaseBdev1", 00:10:13.222 "aliases": [ 00:10:13.222 
"1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9" 00:10:13.222 ], 00:10:13.222 "product_name": "Malloc disk", 00:10:13.222 "block_size": 512, 00:10:13.222 "num_blocks": 65536, 00:10:13.222 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:13.222 "assigned_rate_limits": { 00:10:13.222 "rw_ios_per_sec": 0, 00:10:13.222 "rw_mbytes_per_sec": 0, 00:10:13.222 "r_mbytes_per_sec": 0, 00:10:13.222 "w_mbytes_per_sec": 0 00:10:13.222 }, 00:10:13.222 "claimed": true, 00:10:13.222 "claim_type": "exclusive_write", 00:10:13.222 "zoned": false, 00:10:13.222 "supported_io_types": { 00:10:13.222 "read": true, 00:10:13.222 "write": true, 00:10:13.222 "unmap": true, 00:10:13.222 "flush": true, 00:10:13.222 "reset": true, 00:10:13.222 "nvme_admin": false, 00:10:13.222 "nvme_io": false, 00:10:13.222 "nvme_io_md": false, 00:10:13.222 "write_zeroes": true, 00:10:13.222 "zcopy": true, 00:10:13.222 "get_zone_info": false, 00:10:13.222 "zone_management": false, 00:10:13.222 "zone_append": false, 00:10:13.222 "compare": false, 00:10:13.222 "compare_and_write": false, 00:10:13.222 "abort": true, 00:10:13.222 "seek_hole": false, 00:10:13.222 "seek_data": false, 00:10:13.222 "copy": true, 00:10:13.222 "nvme_iov_md": false 00:10:13.222 }, 00:10:13.222 "memory_domains": [ 00:10:13.222 { 00:10:13.222 "dma_device_id": "system", 00:10:13.222 "dma_device_type": 1 00:10:13.222 }, 00:10:13.222 { 00:10:13.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.222 "dma_device_type": 2 00:10:13.222 } 00:10:13.222 ], 00:10:13.222 "driver_specific": {} 00:10:13.222 } 00:10:13.222 ] 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.222 "name": "Existed_Raid", 00:10:13.222 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:13.222 "strip_size_kb": 0, 00:10:13.222 "state": "configuring", 00:10:13.222 "raid_level": "raid1", 00:10:13.222 "superblock": true, 00:10:13.222 "num_base_bdevs": 3, 00:10:13.222 "num_base_bdevs_discovered": 2, 00:10:13.222 
"num_base_bdevs_operational": 3, 00:10:13.222 "base_bdevs_list": [ 00:10:13.222 { 00:10:13.222 "name": "BaseBdev1", 00:10:13.222 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:13.222 "is_configured": true, 00:10:13.222 "data_offset": 2048, 00:10:13.222 "data_size": 63488 00:10:13.222 }, 00:10:13.222 { 00:10:13.222 "name": null, 00:10:13.222 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:13.222 "is_configured": false, 00:10:13.222 "data_offset": 0, 00:10:13.222 "data_size": 63488 00:10:13.222 }, 00:10:13.222 { 00:10:13.222 "name": "BaseBdev3", 00:10:13.222 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:13.222 "is_configured": true, 00:10:13.222 "data_offset": 2048, 00:10:13.222 "data_size": 63488 00:10:13.222 } 00:10:13.222 ] 00:10:13.222 }' 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.222 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.481 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.482 [2024-11-10 15:19:19.762304] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.482 "name": "Existed_Raid", 00:10:13.482 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:13.482 "strip_size_kb": 0, 00:10:13.482 "state": "configuring", 00:10:13.482 "raid_level": "raid1", 00:10:13.482 "superblock": true, 00:10:13.482 "num_base_bdevs": 3, 00:10:13.482 "num_base_bdevs_discovered": 1, 00:10:13.482 "num_base_bdevs_operational": 3, 00:10:13.482 "base_bdevs_list": [ 00:10:13.482 { 00:10:13.482 "name": "BaseBdev1", 00:10:13.482 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:13.482 "is_configured": true, 00:10:13.482 "data_offset": 2048, 00:10:13.482 "data_size": 63488 00:10:13.482 }, 00:10:13.482 { 00:10:13.482 "name": null, 00:10:13.482 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:13.482 "is_configured": false, 00:10:13.482 "data_offset": 0, 00:10:13.482 "data_size": 63488 00:10:13.482 }, 00:10:13.482 { 00:10:13.482 "name": null, 00:10:13.482 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:13.482 "is_configured": false, 00:10:13.482 "data_offset": 0, 00:10:13.482 "data_size": 63488 00:10:13.482 } 00:10:13.482 ] 00:10:13.482 }' 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.482 15:19:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 [2024-11-10 15:19:20.226487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.049 15:19:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.049 "name": "Existed_Raid", 00:10:14.049 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:14.049 "strip_size_kb": 0, 00:10:14.049 "state": "configuring", 00:10:14.049 "raid_level": "raid1", 00:10:14.049 "superblock": true, 00:10:14.049 "num_base_bdevs": 3, 00:10:14.049 "num_base_bdevs_discovered": 2, 00:10:14.049 "num_base_bdevs_operational": 3, 00:10:14.049 "base_bdevs_list": [ 00:10:14.049 { 00:10:14.049 "name": "BaseBdev1", 00:10:14.049 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 }, 00:10:14.049 { 00:10:14.049 "name": null, 00:10:14.049 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:14.049 "is_configured": false, 00:10:14.049 "data_offset": 0, 00:10:14.049 "data_size": 63488 00:10:14.049 }, 00:10:14.049 { 00:10:14.049 "name": "BaseBdev3", 00:10:14.049 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:14.049 "is_configured": true, 00:10:14.049 "data_offset": 2048, 00:10:14.049 "data_size": 63488 00:10:14.049 } 00:10:14.049 ] 00:10:14.049 }' 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.049 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.309 
15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.309 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.309 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.309 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.568 [2024-11-10 15:19:20.698614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.568 
15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.568 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.568 "name": "Existed_Raid", 00:10:14.568 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:14.568 "strip_size_kb": 0, 00:10:14.568 "state": "configuring", 00:10:14.568 "raid_level": "raid1", 00:10:14.568 "superblock": true, 00:10:14.568 "num_base_bdevs": 3, 00:10:14.568 "num_base_bdevs_discovered": 1, 00:10:14.568 "num_base_bdevs_operational": 3, 00:10:14.568 "base_bdevs_list": [ 00:10:14.568 { 00:10:14.568 "name": null, 00:10:14.568 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:14.568 "is_configured": false, 00:10:14.568 "data_offset": 0, 00:10:14.568 "data_size": 63488 00:10:14.568 }, 00:10:14.568 { 00:10:14.569 "name": null, 00:10:14.569 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:14.569 "is_configured": false, 00:10:14.569 "data_offset": 0, 00:10:14.569 "data_size": 63488 00:10:14.569 }, 00:10:14.569 { 00:10:14.569 "name": 
"BaseBdev3", 00:10:14.569 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:14.569 "is_configured": true, 00:10:14.569 "data_offset": 2048, 00:10:14.569 "data_size": 63488 00:10:14.569 } 00:10:14.569 ] 00:10:14.569 }' 00:10:14.569 15:19:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.569 15:19:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.828 [2024-11-10 15:19:21.181385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.828 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.087 "name": "Existed_Raid", 00:10:15.087 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:15.087 "strip_size_kb": 0, 00:10:15.087 "state": "configuring", 00:10:15.087 "raid_level": "raid1", 00:10:15.087 "superblock": true, 00:10:15.087 "num_base_bdevs": 3, 00:10:15.087 "num_base_bdevs_discovered": 2, 00:10:15.087 "num_base_bdevs_operational": 3, 00:10:15.087 
"base_bdevs_list": [ 00:10:15.087 { 00:10:15.087 "name": null, 00:10:15.087 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:15.087 "is_configured": false, 00:10:15.087 "data_offset": 0, 00:10:15.087 "data_size": 63488 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev2", 00:10:15.087 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:15.087 "is_configured": true, 00:10:15.087 "data_offset": 2048, 00:10:15.087 "data_size": 63488 00:10:15.087 }, 00:10:15.087 { 00:10:15.087 "name": "BaseBdev3", 00:10:15.087 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:15.087 "is_configured": true, 00:10:15.087 "data_offset": 2048, 00:10:15.087 "data_size": 63488 00:10:15.087 } 00:10:15.087 ] 00:10:15.087 }' 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.087 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.347 15:19:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.347 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.607 NewBaseBdev 00:10:15.607 [2024-11-10 15:19:21.740439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.607 [2024-11-10 15:19:21.740611] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:15.607 [2024-11-10 15:19:21.740628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:15.607 [2024-11-10 15:19:21.740849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:10:15.607 [2024-11-10 15:19:21.740974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:15.607 [2024-11-10 15:19:21.740983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:15.607 [2024-11-10 15:19:21.741096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local 
bdev_timeout= 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.607 [ 00:10:15.607 { 00:10:15.607 "name": "NewBaseBdev", 00:10:15.607 "aliases": [ 00:10:15.607 "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9" 00:10:15.607 ], 00:10:15.607 "product_name": "Malloc disk", 00:10:15.607 "block_size": 512, 00:10:15.607 "num_blocks": 65536, 00:10:15.607 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:15.607 "assigned_rate_limits": { 00:10:15.607 "rw_ios_per_sec": 0, 00:10:15.607 "rw_mbytes_per_sec": 0, 00:10:15.607 "r_mbytes_per_sec": 0, 00:10:15.607 "w_mbytes_per_sec": 0 00:10:15.607 }, 00:10:15.607 "claimed": true, 00:10:15.607 "claim_type": "exclusive_write", 00:10:15.607 "zoned": false, 00:10:15.607 "supported_io_types": { 00:10:15.607 "read": true, 00:10:15.607 "write": true, 00:10:15.607 "unmap": true, 00:10:15.607 "flush": true, 00:10:15.607 "reset": true, 00:10:15.607 "nvme_admin": 
false, 00:10:15.607 "nvme_io": false, 00:10:15.607 "nvme_io_md": false, 00:10:15.607 "write_zeroes": true, 00:10:15.607 "zcopy": true, 00:10:15.607 "get_zone_info": false, 00:10:15.607 "zone_management": false, 00:10:15.607 "zone_append": false, 00:10:15.607 "compare": false, 00:10:15.607 "compare_and_write": false, 00:10:15.607 "abort": true, 00:10:15.607 "seek_hole": false, 00:10:15.607 "seek_data": false, 00:10:15.607 "copy": true, 00:10:15.607 "nvme_iov_md": false 00:10:15.607 }, 00:10:15.607 "memory_domains": [ 00:10:15.607 { 00:10:15.607 "dma_device_id": "system", 00:10:15.607 "dma_device_type": 1 00:10:15.607 }, 00:10:15.607 { 00:10:15.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.607 "dma_device_type": 2 00:10:15.607 } 00:10:15.607 ], 00:10:15.607 "driver_specific": {} 00:10:15.607 } 00:10:15.607 ] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.607 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.607 "name": "Existed_Raid", 00:10:15.607 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:15.607 "strip_size_kb": 0, 00:10:15.607 "state": "online", 00:10:15.607 "raid_level": "raid1", 00:10:15.607 "superblock": true, 00:10:15.607 "num_base_bdevs": 3, 00:10:15.607 "num_base_bdevs_discovered": 3, 00:10:15.608 "num_base_bdevs_operational": 3, 00:10:15.608 "base_bdevs_list": [ 00:10:15.608 { 00:10:15.608 "name": "NewBaseBdev", 00:10:15.608 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "name": "BaseBdev2", 00:10:15.608 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "name": "BaseBdev3", 00:10:15.608 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 } 
00:10:15.608 ] 00:10:15.608 }' 00:10:15.608 15:19:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.608 15:19:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.867 [2024-11-10 15:19:22.204963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.867 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.126 "name": "Existed_Raid", 00:10:16.126 "aliases": [ 00:10:16.126 "151a67e3-f882-46ad-87f5-7f74125f035f" 00:10:16.126 ], 00:10:16.126 "product_name": "Raid Volume", 00:10:16.126 "block_size": 512, 00:10:16.126 "num_blocks": 63488, 00:10:16.126 "uuid": 
"151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:16.126 "assigned_rate_limits": { 00:10:16.126 "rw_ios_per_sec": 0, 00:10:16.126 "rw_mbytes_per_sec": 0, 00:10:16.126 "r_mbytes_per_sec": 0, 00:10:16.126 "w_mbytes_per_sec": 0 00:10:16.126 }, 00:10:16.126 "claimed": false, 00:10:16.126 "zoned": false, 00:10:16.126 "supported_io_types": { 00:10:16.126 "read": true, 00:10:16.126 "write": true, 00:10:16.126 "unmap": false, 00:10:16.126 "flush": false, 00:10:16.126 "reset": true, 00:10:16.126 "nvme_admin": false, 00:10:16.126 "nvme_io": false, 00:10:16.126 "nvme_io_md": false, 00:10:16.126 "write_zeroes": true, 00:10:16.126 "zcopy": false, 00:10:16.126 "get_zone_info": false, 00:10:16.126 "zone_management": false, 00:10:16.126 "zone_append": false, 00:10:16.126 "compare": false, 00:10:16.126 "compare_and_write": false, 00:10:16.126 "abort": false, 00:10:16.126 "seek_hole": false, 00:10:16.126 "seek_data": false, 00:10:16.126 "copy": false, 00:10:16.126 "nvme_iov_md": false 00:10:16.126 }, 00:10:16.126 "memory_domains": [ 00:10:16.126 { 00:10:16.126 "dma_device_id": "system", 00:10:16.126 "dma_device_type": 1 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.126 "dma_device_type": 2 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "dma_device_id": "system", 00:10:16.126 "dma_device_type": 1 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.126 "dma_device_type": 2 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "dma_device_id": "system", 00:10:16.126 "dma_device_type": 1 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.126 "dma_device_type": 2 00:10:16.126 } 00:10:16.126 ], 00:10:16.126 "driver_specific": { 00:10:16.126 "raid": { 00:10:16.126 "uuid": "151a67e3-f882-46ad-87f5-7f74125f035f", 00:10:16.126 "strip_size_kb": 0, 00:10:16.126 "state": "online", 00:10:16.126 "raid_level": "raid1", 00:10:16.126 "superblock": true, 00:10:16.126 "num_base_bdevs": 
3, 00:10:16.126 "num_base_bdevs_discovered": 3, 00:10:16.126 "num_base_bdevs_operational": 3, 00:10:16.126 "base_bdevs_list": [ 00:10:16.126 { 00:10:16.126 "name": "NewBaseBdev", 00:10:16.126 "uuid": "1f1f62b1-fbdd-41fb-a0b1-37510bb1bbd9", 00:10:16.126 "is_configured": true, 00:10:16.126 "data_offset": 2048, 00:10:16.126 "data_size": 63488 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "name": "BaseBdev2", 00:10:16.126 "uuid": "b0b4a189-d389-4ed5-90ba-e8edaee93c3c", 00:10:16.126 "is_configured": true, 00:10:16.126 "data_offset": 2048, 00:10:16.126 "data_size": 63488 00:10:16.126 }, 00:10:16.126 { 00:10:16.126 "name": "BaseBdev3", 00:10:16.126 "uuid": "4e7843e9-6641-4753-b15a-39684cecf063", 00:10:16.126 "is_configured": true, 00:10:16.126 "data_offset": 2048, 00:10:16.126 "data_size": 63488 00:10:16.126 } 00:10:16.126 ] 00:10:16.126 } 00:10:16.126 } 00:10:16.126 }' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.126 BaseBdev2 00:10:16.126 BaseBdev3' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.126 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.127 15:19:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.127 [2024-11-10 15:19:22.460701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.127 [2024-11-10 15:19:22.460727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.127 [2024-11-10 15:19:22.460795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.127 [2024-11-10 15:19:22.461065] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.127 [2024-11-10 15:19:22.461082] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80447 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 80447 ']' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 80447 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 
00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:16.127 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80447 00:10:16.385 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:16.385 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:16.385 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80447' 00:10:16.385 killing process with pid 80447 00:10:16.385 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 80447 00:10:16.385 [2024-11-10 15:19:22.507247] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.385 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 80447 00:10:16.385 [2024-11-10 15:19:22.538904] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.645 15:19:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:16.645 00:10:16.645 real 0m8.719s 00:10:16.645 user 0m14.927s 00:10:16.645 sys 0m1.762s 00:10:16.645 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:16.645 ************************************ 00:10:16.645 END TEST raid_state_function_test_sb 00:10:16.645 ************************************ 00:10:16.645 15:19:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.645 15:19:22 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:16.645 15:19:22 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:16.645 15:19:22 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:16.645 15:19:22 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.645 ************************************ 00:10:16.645 START TEST raid_superblock_test 00:10:16.645 ************************************ 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 3 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81045 00:10:16.645 15:19:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81045 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 81045 ']' 00:10:16.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:16.645 15:19:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.645 [2024-11-10 15:19:22.914594] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:16.645 [2024-11-10 15:19:22.914833] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81045 ] 00:10:16.905 [2024-11-10 15:19:23.046307] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:16.905 [2024-11-10 15:19:23.082559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.905 [2024-11-10 15:19:23.108942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.905 [2024-11-10 15:19:23.151479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.905 [2024-11-10 15:19:23.151595] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.472 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 malloc1 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-11-10 15:19:23.758804] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.473 [2024-11-10 15:19:23.758915] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.473 [2024-11-10 15:19:23.758975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:17.473 [2024-11-10 15:19:23.759036] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.473 [2024-11-10 15:19:23.761216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.473 [2024-11-10 15:19:23.761285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.473 pt1 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 malloc2 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-11-10 15:19:23.787490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:17.473 [2024-11-10 15:19:23.787592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.473 [2024-11-10 15:19:23.787628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:17.473 [2024-11-10 15:19:23.787657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.473 [2024-11-10 15:19:23.789928] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.473 [2024-11-10 15:19:23.789999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:17.473 pt2 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 malloc3 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-11-10 15:19:23.820157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:17.473 [2024-11-10 15:19:23.820210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.473 [2024-11-10 15:19:23.820230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:17.473 [2024-11-10 15:19:23.820239] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:17.473 [2024-11-10 15:19:23.822326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.473 [2024-11-10 15:19:23.822401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:17.473 pt3 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.473 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.473 [2024-11-10 15:19:23.832191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.733 [2024-11-10 15:19:23.834061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.733 [2024-11-10 15:19:23.834128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.733 [2024-11-10 15:19:23.834270] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:17.733 [2024-11-10 15:19:23.834284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.733 [2024-11-10 15:19:23.834540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:17.733 [2024-11-10 15:19:23.834689] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:17.733 [2024-11-10 15:19:23.834700] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:17.733 [2024-11-10 15:19:23.834834] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.733 "name": "raid_bdev1", 00:10:17.733 "uuid": 
"a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:17.733 "strip_size_kb": 0, 00:10:17.733 "state": "online", 00:10:17.733 "raid_level": "raid1", 00:10:17.733 "superblock": true, 00:10:17.733 "num_base_bdevs": 3, 00:10:17.733 "num_base_bdevs_discovered": 3, 00:10:17.733 "num_base_bdevs_operational": 3, 00:10:17.733 "base_bdevs_list": [ 00:10:17.733 { 00:10:17.733 "name": "pt1", 00:10:17.733 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.733 "is_configured": true, 00:10:17.733 "data_offset": 2048, 00:10:17.733 "data_size": 63488 00:10:17.733 }, 00:10:17.733 { 00:10:17.733 "name": "pt2", 00:10:17.733 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.733 "is_configured": true, 00:10:17.733 "data_offset": 2048, 00:10:17.733 "data_size": 63488 00:10:17.733 }, 00:10:17.733 { 00:10:17.733 "name": "pt3", 00:10:17.733 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.733 "is_configured": true, 00:10:17.733 "data_offset": 2048, 00:10:17.733 "data_size": 63488 00:10:17.733 } 00:10:17.733 ] 00:10:17.733 }' 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.733 15:19:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.993 [2024-11-10 15:19:24.280589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.993 "name": "raid_bdev1", 00:10:17.993 "aliases": [ 00:10:17.993 "a40de663-b0fc-4850-ad60-a0b92e08a573" 00:10:17.993 ], 00:10:17.993 "product_name": "Raid Volume", 00:10:17.993 "block_size": 512, 00:10:17.993 "num_blocks": 63488, 00:10:17.993 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:17.993 "assigned_rate_limits": { 00:10:17.993 "rw_ios_per_sec": 0, 00:10:17.993 "rw_mbytes_per_sec": 0, 00:10:17.993 "r_mbytes_per_sec": 0, 00:10:17.993 "w_mbytes_per_sec": 0 00:10:17.993 }, 00:10:17.993 "claimed": false, 00:10:17.993 "zoned": false, 00:10:17.993 "supported_io_types": { 00:10:17.993 "read": true, 00:10:17.993 "write": true, 00:10:17.993 "unmap": false, 00:10:17.993 "flush": false, 00:10:17.993 "reset": true, 00:10:17.993 "nvme_admin": false, 00:10:17.993 "nvme_io": false, 00:10:17.993 "nvme_io_md": false, 00:10:17.993 "write_zeroes": true, 00:10:17.993 "zcopy": false, 00:10:17.993 "get_zone_info": false, 00:10:17.993 "zone_management": false, 00:10:17.993 "zone_append": false, 00:10:17.993 "compare": false, 00:10:17.993 "compare_and_write": false, 00:10:17.993 "abort": false, 00:10:17.993 "seek_hole": false, 00:10:17.993 "seek_data": false, 00:10:17.993 "copy": false, 00:10:17.993 "nvme_iov_md": false 00:10:17.993 }, 00:10:17.993 "memory_domains": [ 00:10:17.993 { 00:10:17.993 "dma_device_id": "system", 00:10:17.993 
"dma_device_type": 1 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.993 "dma_device_type": 2 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "dma_device_id": "system", 00:10:17.993 "dma_device_type": 1 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.993 "dma_device_type": 2 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "dma_device_id": "system", 00:10:17.993 "dma_device_type": 1 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.993 "dma_device_type": 2 00:10:17.993 } 00:10:17.993 ], 00:10:17.993 "driver_specific": { 00:10:17.993 "raid": { 00:10:17.993 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:17.993 "strip_size_kb": 0, 00:10:17.993 "state": "online", 00:10:17.993 "raid_level": "raid1", 00:10:17.993 "superblock": true, 00:10:17.993 "num_base_bdevs": 3, 00:10:17.993 "num_base_bdevs_discovered": 3, 00:10:17.993 "num_base_bdevs_operational": 3, 00:10:17.993 "base_bdevs_list": [ 00:10:17.993 { 00:10:17.993 "name": "pt1", 00:10:17.993 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.993 "is_configured": true, 00:10:17.993 "data_offset": 2048, 00:10:17.993 "data_size": 63488 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "name": "pt2", 00:10:17.993 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.993 "is_configured": true, 00:10:17.993 "data_offset": 2048, 00:10:17.993 "data_size": 63488 00:10:17.993 }, 00:10:17.993 { 00:10:17.993 "name": "pt3", 00:10:17.993 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.993 "is_configured": true, 00:10:17.993 "data_offset": 2048, 00:10:17.993 "data_size": 63488 00:10:17.993 } 00:10:17.993 ] 00:10:17.993 } 00:10:17.993 } 00:10:17.993 }' 00:10:17.993 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:18.253 pt2 00:10:18.253 pt3' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 [2024-11-10 15:19:24.556622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a40de663-b0fc-4850-ad60-a0b92e08a573 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a40de663-b0fc-4850-ad60-a0b92e08a573 ']' 00:10:18.253 15:19:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 [2024-11-10 15:19:24.596339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.253 [2024-11-10 15:19:24.596411] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.253 [2024-11-10 15:19:24.596513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.253 [2024-11-10 15:19:24.596631] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.253 [2024-11-10 15:19:24.596682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:18.253 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 [2024-11-10 15:19:24.740437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:18.514 [2024-11-10 15:19:24.742363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:18.514 [2024-11-10 15:19:24.742415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:18.514 [2024-11-10 15:19:24.742464] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:18.514 [2024-11-10 15:19:24.742523] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:18.514 [2024-11-10 15:19:24.742544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:18.514 [2024-11-10 15:19:24.742559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.514 [2024-11-10 15:19:24.742569] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:18.514 request: 00:10:18.514 { 00:10:18.514 "name": "raid_bdev1", 00:10:18.514 "raid_level": "raid1", 00:10:18.514 "base_bdevs": [ 00:10:18.514 "malloc1", 00:10:18.514 "malloc2", 00:10:18.514 "malloc3" 00:10:18.514 ], 00:10:18.514 "superblock": false, 00:10:18.514 "method": "bdev_raid_create", 00:10:18.514 "req_id": 1 00:10:18.514 } 00:10:18.514 Got JSON-RPC error response 00:10:18.514 response: 00:10:18.514 { 00:10:18.514 "code": -17, 00:10:18.514 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:18.514 } 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 [2024-11-10 15:19:24.808408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.514 [2024-11-10 15:19:24.808517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.514 [2024-11-10 15:19:24.808558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.514 [2024-11-10 15:19:24.808589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.514 [2024-11-10 15:19:24.810919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.514 [2024-11-10 15:19:24.810986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.514 [2024-11-10 15:19:24.811111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.514 [2024-11-10 15:19:24.811180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.514 pt1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.514 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.514 "name": "raid_bdev1", 00:10:18.514 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:18.514 "strip_size_kb": 0, 00:10:18.514 "state": "configuring", 00:10:18.514 "raid_level": "raid1", 00:10:18.514 "superblock": true, 00:10:18.514 "num_base_bdevs": 3, 00:10:18.514 "num_base_bdevs_discovered": 1, 00:10:18.514 "num_base_bdevs_operational": 3, 00:10:18.514 "base_bdevs_list": [ 00:10:18.514 { 00:10:18.514 "name": 
"pt1", 00:10:18.514 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.515 "is_configured": true, 00:10:18.515 "data_offset": 2048, 00:10:18.515 "data_size": 63488 00:10:18.515 }, 00:10:18.515 { 00:10:18.515 "name": null, 00:10:18.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.515 "is_configured": false, 00:10:18.515 "data_offset": 2048, 00:10:18.515 "data_size": 63488 00:10:18.515 }, 00:10:18.515 { 00:10:18.515 "name": null, 00:10:18.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.515 "is_configured": false, 00:10:18.515 "data_offset": 2048, 00:10:18.515 "data_size": 63488 00:10:18.515 } 00:10:18.515 ] 00:10:18.515 }' 00:10:18.515 15:19:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.515 15:19:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 [2024-11-10 15:19:25.280591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.084 [2024-11-10 15:19:25.280710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.084 [2024-11-10 15:19:25.280758] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:19.084 [2024-11-10 15:19:25.280768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.084 [2024-11-10 15:19:25.281194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.084 [2024-11-10 15:19:25.281221] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.084 [2024-11-10 15:19:25.281304] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.084 [2024-11-10 15:19:25.281326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.084 pt2 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.084 [2024-11-10 15:19:25.288611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.084 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.084 15:19:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.085 "name": "raid_bdev1", 00:10:19.085 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:19.085 "strip_size_kb": 0, 00:10:19.085 "state": "configuring", 00:10:19.085 "raid_level": "raid1", 00:10:19.085 "superblock": true, 00:10:19.085 "num_base_bdevs": 3, 00:10:19.085 "num_base_bdevs_discovered": 1, 00:10:19.085 "num_base_bdevs_operational": 3, 00:10:19.085 "base_bdevs_list": [ 00:10:19.085 { 00:10:19.085 "name": "pt1", 00:10:19.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.085 "is_configured": true, 00:10:19.085 "data_offset": 2048, 00:10:19.085 "data_size": 63488 00:10:19.085 }, 00:10:19.085 { 00:10:19.085 "name": null, 00:10:19.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.085 "is_configured": false, 00:10:19.085 "data_offset": 0, 00:10:19.085 "data_size": 63488 00:10:19.085 }, 00:10:19.085 { 00:10:19.085 "name": null, 00:10:19.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.085 "is_configured": false, 00:10:19.085 "data_offset": 2048, 00:10:19.085 "data_size": 63488 00:10:19.085 } 00:10:19.085 ] 00:10:19.085 }' 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.085 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:19.344 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:19.344 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.344 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:19.344 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.344 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.609 [2024-11-10 15:19:25.708729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:19.609 [2024-11-10 15:19:25.708857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.609 [2024-11-10 15:19:25.708892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:19.609 [2024-11-10 15:19:25.708936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.609 [2024-11-10 15:19:25.709359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.609 [2024-11-10 15:19:25.709426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:19.609 [2024-11-10 15:19:25.709531] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:19.609 [2024-11-10 15:19:25.709596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.609 pt2 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 
-u 00000000-0000-0000-0000-000000000003 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.609 [2024-11-10 15:19:25.720685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.609 [2024-11-10 15:19:25.720788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.609 [2024-11-10 15:19:25.720818] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.609 [2024-11-10 15:19:25.720846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.609 [2024-11-10 15:19:25.721200] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.609 [2024-11-10 15:19:25.721227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.609 [2024-11-10 15:19:25.721282] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:19.609 [2024-11-10 15:19:25.721303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.609 [2024-11-10 15:19:25.721394] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:19.609 [2024-11-10 15:19:25.721405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.609 [2024-11-10 15:19:25.721637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:19.609 [2024-11-10 15:19:25.721756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:19.609 [2024-11-10 15:19:25.721772] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:19.609 [2024-11-10 15:19:25.721884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:10:19.609 pt3 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.609 15:19:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.609 "name": "raid_bdev1", 00:10:19.609 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:19.609 "strip_size_kb": 0, 00:10:19.609 "state": "online", 00:10:19.609 "raid_level": "raid1", 00:10:19.609 "superblock": true, 00:10:19.609 "num_base_bdevs": 3, 00:10:19.609 "num_base_bdevs_discovered": 3, 00:10:19.609 "num_base_bdevs_operational": 3, 00:10:19.609 "base_bdevs_list": [ 00:10:19.609 { 00:10:19.609 "name": "pt1", 00:10:19.609 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.609 "is_configured": true, 00:10:19.609 "data_offset": 2048, 00:10:19.609 "data_size": 63488 00:10:19.609 }, 00:10:19.609 { 00:10:19.609 "name": "pt2", 00:10:19.609 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.609 "is_configured": true, 00:10:19.609 "data_offset": 2048, 00:10:19.609 "data_size": 63488 00:10:19.609 }, 00:10:19.609 { 00:10:19.609 "name": "pt3", 00:10:19.609 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.609 "is_configured": true, 00:10:19.609 "data_offset": 2048, 00:10:19.609 "data_size": 63488 00:10:19.609 } 00:10:19.609 ] 00:10:19.609 }' 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.609 15:19:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 [2024-11-10 15:19:26.149153] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.876 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.876 "name": "raid_bdev1", 00:10:19.876 "aliases": [ 00:10:19.876 "a40de663-b0fc-4850-ad60-a0b92e08a573" 00:10:19.876 ], 00:10:19.876 "product_name": "Raid Volume", 00:10:19.876 "block_size": 512, 00:10:19.876 "num_blocks": 63488, 00:10:19.876 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:19.876 "assigned_rate_limits": { 00:10:19.876 "rw_ios_per_sec": 0, 00:10:19.876 "rw_mbytes_per_sec": 0, 00:10:19.876 "r_mbytes_per_sec": 0, 00:10:19.876 "w_mbytes_per_sec": 0 00:10:19.876 }, 00:10:19.876 "claimed": false, 00:10:19.876 "zoned": false, 00:10:19.876 "supported_io_types": { 00:10:19.876 "read": true, 00:10:19.876 "write": true, 00:10:19.876 "unmap": false, 00:10:19.876 "flush": false, 00:10:19.876 "reset": true, 00:10:19.876 "nvme_admin": false, 00:10:19.876 "nvme_io": false, 00:10:19.876 "nvme_io_md": false, 00:10:19.876 "write_zeroes": true, 00:10:19.876 "zcopy": false, 00:10:19.876 "get_zone_info": false, 00:10:19.876 "zone_management": false, 00:10:19.876 "zone_append": false, 00:10:19.876 "compare": false, 00:10:19.876 "compare_and_write": false, 00:10:19.876 "abort": false, 00:10:19.876 "seek_hole": false, 00:10:19.876 "seek_data": false, 00:10:19.876 "copy": false, 00:10:19.876 
"nvme_iov_md": false 00:10:19.876 }, 00:10:19.876 "memory_domains": [ 00:10:19.876 { 00:10:19.876 "dma_device_id": "system", 00:10:19.876 "dma_device_type": 1 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.876 "dma_device_type": 2 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "dma_device_id": "system", 00:10:19.876 "dma_device_type": 1 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.876 "dma_device_type": 2 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "dma_device_id": "system", 00:10:19.877 "dma_device_type": 1 00:10:19.877 }, 00:10:19.877 { 00:10:19.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.877 "dma_device_type": 2 00:10:19.877 } 00:10:19.877 ], 00:10:19.877 "driver_specific": { 00:10:19.877 "raid": { 00:10:19.877 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:19.877 "strip_size_kb": 0, 00:10:19.877 "state": "online", 00:10:19.877 "raid_level": "raid1", 00:10:19.877 "superblock": true, 00:10:19.877 "num_base_bdevs": 3, 00:10:19.877 "num_base_bdevs_discovered": 3, 00:10:19.877 "num_base_bdevs_operational": 3, 00:10:19.877 "base_bdevs_list": [ 00:10:19.877 { 00:10:19.877 "name": "pt1", 00:10:19.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.877 "is_configured": true, 00:10:19.877 "data_offset": 2048, 00:10:19.877 "data_size": 63488 00:10:19.877 }, 00:10:19.877 { 00:10:19.877 "name": "pt2", 00:10:19.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.877 "is_configured": true, 00:10:19.877 "data_offset": 2048, 00:10:19.877 "data_size": 63488 00:10:19.877 }, 00:10:19.877 { 00:10:19.877 "name": "pt3", 00:10:19.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.877 "is_configured": true, 00:10:19.877 "data_offset": 2048, 00:10:19.877 "data_size": 63488 00:10:19.877 } 00:10:19.877 ] 00:10:19.877 } 00:10:19.877 } 00:10:19.877 }' 00:10:19.877 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:20.136 pt2 00:10:20.136 pt3' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.136 15:19:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.136 [2024-11-10 15:19:26.425209] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a40de663-b0fc-4850-ad60-a0b92e08a573 '!=' 
a40de663-b0fc-4850-ad60-a0b92e08a573 ']' 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:20.136 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.137 [2024-11-10 15:19:26.456947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.137 15:19:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.137 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.397 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.397 "name": "raid_bdev1", 00:10:20.397 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:20.397 "strip_size_kb": 0, 00:10:20.397 "state": "online", 00:10:20.397 "raid_level": "raid1", 00:10:20.397 "superblock": true, 00:10:20.397 "num_base_bdevs": 3, 00:10:20.397 "num_base_bdevs_discovered": 2, 00:10:20.397 "num_base_bdevs_operational": 2, 00:10:20.397 "base_bdevs_list": [ 00:10:20.397 { 00:10:20.397 "name": null, 00:10:20.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.397 "is_configured": false, 00:10:20.397 "data_offset": 0, 00:10:20.397 "data_size": 63488 00:10:20.397 }, 00:10:20.397 { 00:10:20.397 "name": "pt2", 00:10:20.397 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.397 "is_configured": true, 00:10:20.397 "data_offset": 2048, 00:10:20.397 "data_size": 63488 00:10:20.397 }, 00:10:20.397 { 00:10:20.397 "name": "pt3", 00:10:20.397 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.397 "is_configured": true, 00:10:20.397 "data_offset": 2048, 00:10:20.397 "data_size": 63488 00:10:20.397 } 00:10:20.397 ] 00:10:20.397 }' 00:10:20.397 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.397 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.656 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:10:20.656 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.656 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.656 [2024-11-10 15:19:26.881041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.657 [2024-11-10 15:19:26.881113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.657 [2024-11-10 15:19:26.881214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.657 [2024-11-10 15:19:26.881287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.657 [2024-11-10 15:19:26.881333] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 
-- # rpc_cmd bdev_passthru_delete pt2 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.657 [2024-11-10 15:19:26.965043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.657 [2024-11-10 15:19:26.965148] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.657 
[2024-11-10 15:19:26.965179] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:20.657 [2024-11-10 15:19:26.965207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.657 [2024-11-10 15:19:26.967366] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.657 [2024-11-10 15:19:26.967441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.657 [2024-11-10 15:19:26.967530] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.657 [2024-11-10 15:19:26.967607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.657 pt2 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.657 15:19:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.657 15:19:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.916 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.916 "name": "raid_bdev1", 00:10:20.916 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:20.916 "strip_size_kb": 0, 00:10:20.916 "state": "configuring", 00:10:20.916 "raid_level": "raid1", 00:10:20.916 "superblock": true, 00:10:20.916 "num_base_bdevs": 3, 00:10:20.916 "num_base_bdevs_discovered": 1, 00:10:20.916 "num_base_bdevs_operational": 2, 00:10:20.916 "base_bdevs_list": [ 00:10:20.916 { 00:10:20.916 "name": null, 00:10:20.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.916 "is_configured": false, 00:10:20.916 "data_offset": 2048, 00:10:20.916 "data_size": 63488 00:10:20.916 }, 00:10:20.916 { 00:10:20.916 "name": "pt2", 00:10:20.916 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.916 "is_configured": true, 00:10:20.916 "data_offset": 2048, 00:10:20.916 "data_size": 63488 00:10:20.916 }, 00:10:20.916 { 00:10:20.916 "name": null, 00:10:20.916 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.916 "is_configured": false, 00:10:20.916 "data_offset": 2048, 00:10:20.916 "data_size": 63488 00:10:20.916 } 00:10:20.916 ] 00:10:20.916 }' 00:10:20.916 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.916 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( 
i++ )) 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.175 [2024-11-10 15:19:27.417217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.175 [2024-11-10 15:19:27.417308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.175 [2024-11-10 15:19:27.417332] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:21.175 [2024-11-10 15:19:27.417345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.175 [2024-11-10 15:19:27.417743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.175 [2024-11-10 15:19:27.417767] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.175 [2024-11-10 15:19:27.417842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.175 [2024-11-10 15:19:27.417873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.175 [2024-11-10 15:19:27.417968] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:21.175 [2024-11-10 15:19:27.417984] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.175 [2024-11-10 15:19:27.418241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:21.175 [2024-11-10 15:19:27.418359] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:21.175 [2024-11-10 15:19:27.418368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:21.175 [2024-11-10 15:19:27.418472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.175 pt3 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.175 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.176 "name": "raid_bdev1", 00:10:21.176 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:21.176 "strip_size_kb": 0, 00:10:21.176 "state": "online", 00:10:21.176 "raid_level": "raid1", 00:10:21.176 "superblock": true, 00:10:21.176 "num_base_bdevs": 3, 00:10:21.176 "num_base_bdevs_discovered": 2, 00:10:21.176 "num_base_bdevs_operational": 2, 00:10:21.176 "base_bdevs_list": [ 00:10:21.176 { 00:10:21.176 "name": null, 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.176 "is_configured": false, 00:10:21.176 "data_offset": 2048, 00:10:21.176 "data_size": 63488 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "name": "pt2", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.176 "is_configured": true, 00:10:21.176 "data_offset": 2048, 00:10:21.176 "data_size": 63488 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "name": "pt3", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.176 "is_configured": true, 00:10:21.176 "data_offset": 2048, 00:10:21.176 "data_size": 63488 00:10:21.176 } 00:10:21.176 ] 00:10:21.176 }' 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.176 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 [2024-11-10 15:19:27.865322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.745 [2024-11-10 
15:19:27.865357] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.745 [2024-11-10 15:19:27.865439] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.745 [2024-11-10 15:19:27.865499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.745 [2024-11-10 15:19:27.865509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.745 15:19:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 [2024-11-10 15:19:27.929313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.745 [2024-11-10 15:19:27.929375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.745 [2024-11-10 15:19:27.929393] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:21.745 [2024-11-10 15:19:27.929401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.745 [2024-11-10 15:19:27.931566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.745 [2024-11-10 15:19:27.931647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.745 [2024-11-10 15:19:27.931733] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.745 [2024-11-10 15:19:27.931775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.745 [2024-11-10 15:19:27.931888] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:21.745 [2024-11-10 15:19:27.931905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.745 [2024-11-10 15:19:27.931922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:10:21.745 [2024-11-10 15:19:27.931953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.745 pt1 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.745 "name": "raid_bdev1", 00:10:21.745 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:21.745 "strip_size_kb": 
0, 00:10:21.745 "state": "configuring", 00:10:21.745 "raid_level": "raid1", 00:10:21.745 "superblock": true, 00:10:21.745 "num_base_bdevs": 3, 00:10:21.745 "num_base_bdevs_discovered": 1, 00:10:21.745 "num_base_bdevs_operational": 2, 00:10:21.745 "base_bdevs_list": [ 00:10:21.745 { 00:10:21.745 "name": null, 00:10:21.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.745 "is_configured": false, 00:10:21.745 "data_offset": 2048, 00:10:21.745 "data_size": 63488 00:10:21.745 }, 00:10:21.745 { 00:10:21.745 "name": "pt2", 00:10:21.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.745 "is_configured": true, 00:10:21.745 "data_offset": 2048, 00:10:21.745 "data_size": 63488 00:10:21.745 }, 00:10:21.745 { 00:10:21.745 "name": null, 00:10:21.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.745 "is_configured": false, 00:10:21.745 "data_offset": 2048, 00:10:21.745 "data_size": 63488 00:10:21.745 } 00:10:21.745 ] 00:10:21.745 }' 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.745 15:19:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 
00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.315 [2024-11-10 15:19:28.437491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:22.315 [2024-11-10 15:19:28.437576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:22.315 [2024-11-10 15:19:28.437601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:22.315 [2024-11-10 15:19:28.437612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:22.315 [2024-11-10 15:19:28.438028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:22.315 [2024-11-10 15:19:28.438122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:22.315 [2024-11-10 15:19:28.438223] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:22.315 [2024-11-10 15:19:28.438305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:22.315 [2024-11-10 15:19:28.438457] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:22.315 [2024-11-10 15:19:28.438499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:22.315 [2024-11-10 15:19:28.438773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:10:22.315 [2024-11-10 15:19:28.438959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:22.315 [2024-11-10 15:19:28.439017] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:22.315 [2024-11-10 15:19:28.439181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:22.315 pt3 00:10:22.315 15:19:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.315 "name": "raid_bdev1", 00:10:22.315 "uuid": "a40de663-b0fc-4850-ad60-a0b92e08a573", 00:10:22.315 "strip_size_kb": 0, 00:10:22.315 "state": "online", 
00:10:22.315 "raid_level": "raid1", 00:10:22.315 "superblock": true, 00:10:22.315 "num_base_bdevs": 3, 00:10:22.315 "num_base_bdevs_discovered": 2, 00:10:22.315 "num_base_bdevs_operational": 2, 00:10:22.315 "base_bdevs_list": [ 00:10:22.315 { 00:10:22.315 "name": null, 00:10:22.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.315 "is_configured": false, 00:10:22.315 "data_offset": 2048, 00:10:22.315 "data_size": 63488 00:10:22.315 }, 00:10:22.315 { 00:10:22.315 "name": "pt2", 00:10:22.315 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:22.315 "is_configured": true, 00:10:22.315 "data_offset": 2048, 00:10:22.315 "data_size": 63488 00:10:22.315 }, 00:10:22.315 { 00:10:22.315 "name": "pt3", 00:10:22.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:22.315 "is_configured": true, 00:10:22.315 "data_offset": 2048, 00:10:22.315 "data_size": 63488 00:10:22.315 } 00:10:22.315 ] 00:10:22.315 }' 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.315 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.574 [2024-11-10 15:19:28.901885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a40de663-b0fc-4850-ad60-a0b92e08a573 '!=' a40de663-b0fc-4850-ad60-a0b92e08a573 ']' 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81045 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 81045 ']' 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 81045 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:22.574 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:22.575 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81045 00:10:22.834 killing process with pid 81045 00:10:22.834 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:22.834 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:22.834 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81045' 00:10:22.834 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 81045 00:10:22.834 15:19:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 81045 00:10:22.834 [2024-11-10 15:19:28.963761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.834 [2024-11-10 
15:19:28.963862] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.834 [2024-11-10 15:19:28.963928] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.834 [2024-11-10 15:19:28.963941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:22.834 [2024-11-10 15:19:28.997945] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.094 15:19:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.094 ************************************ 00:10:23.094 END TEST raid_superblock_test 00:10:23.094 ************************************ 00:10:23.094 00:10:23.094 real 0m6.379s 00:10:23.094 user 0m10.742s 00:10:23.094 sys 0m1.313s 00:10:23.094 15:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:23.094 15:19:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.094 15:19:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:23.094 15:19:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:23.094 15:19:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:23.094 15:19:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.094 ************************************ 00:10:23.094 START TEST raid_read_error_test 00:10:23.094 ************************************ 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 read 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 
00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.094 15:19:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ZPiZ1TamAJ 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81480 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.094 15:19:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81480 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 81480 ']' 00:10:23.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:23.095 15:19:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.095 [2024-11-10 15:19:29.377244] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:10:23.095 [2024-11-10 15:19:29.377368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81480 ] 00:10:23.355 [2024-11-10 15:19:29.509960] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:23.355 [2024-11-10 15:19:29.549815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.355 [2024-11-10 15:19:29.575141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.355 [2024-11-10 15:19:29.617768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.355 [2024-11-10 15:19:29.617887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 BaseBdev1_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 true 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 [2024-11-10 15:19:30.225273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.924 [2024-11-10 15:19:30.225341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.924 [2024-11-10 15:19:30.225373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:23.924 [2024-11-10 15:19:30.225386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.924 [2024-11-10 15:19:30.227622] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.924 [2024-11-10 15:19:30.227662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.924 BaseBdev1 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 BaseBdev2_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 true 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.924 [2024-11-10 15:19:30.261920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.924 [2024-11-10 15:19:30.262024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.924 [2024-11-10 15:19:30.262046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:23.924 [2024-11-10 15:19:30.262056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.924 [2024-11-10 15:19:30.264147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.924 [2024-11-10 15:19:30.264202] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.924 BaseBdev2 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.924 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.184 BaseBdev3_malloc 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.184 true 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.184 [2024-11-10 15:19:30.302599] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.184 [2024-11-10 15:19:30.302699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.184 [2024-11-10 15:19:30.302750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.184 [2024-11-10 15:19:30.302781] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.184 [2024-11-10 15:19:30.304955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.184 [2024-11-10 15:19:30.305047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.184 BaseBdev3 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.184 15:19:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.184 [2024-11-10 15:19:30.314641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.184 [2024-11-10 15:19:30.316548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.184 [2024-11-10 15:19:30.316683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.184 [2024-11-10 15:19:30.316889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:24.184 [2024-11-10 15:19:30.316936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.184 [2024-11-10 15:19:30.317207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:10:24.184 [2024-11-10 15:19:30.317396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:24.184 [2024-11-10 15:19:30.317444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:24.184 [2024-11-10 15:19:30.317614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.184 15:19:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.184 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.184 "name": "raid_bdev1", 00:10:24.184 "uuid": "1625a728-ad31-4b60-8281-1802c4022639", 00:10:24.184 "strip_size_kb": 0, 00:10:24.184 "state": "online", 00:10:24.184 "raid_level": "raid1", 00:10:24.184 "superblock": true, 00:10:24.184 "num_base_bdevs": 3, 00:10:24.184 "num_base_bdevs_discovered": 3, 00:10:24.184 "num_base_bdevs_operational": 3, 00:10:24.184 "base_bdevs_list": [ 00:10:24.184 { 00:10:24.184 "name": "BaseBdev1", 00:10:24.184 "uuid": "eb549cd4-3fa5-5f2b-9668-a41502a9702f", 00:10:24.184 "is_configured": true, 00:10:24.184 "data_offset": 2048, 00:10:24.184 "data_size": 63488 00:10:24.184 }, 00:10:24.184 
{ 00:10:24.184 "name": "BaseBdev2", 00:10:24.185 "uuid": "0b8ca1ab-28f7-5bdd-a447-6365b5d116e6", 00:10:24.185 "is_configured": true, 00:10:24.185 "data_offset": 2048, 00:10:24.185 "data_size": 63488 00:10:24.185 }, 00:10:24.185 { 00:10:24.185 "name": "BaseBdev3", 00:10:24.185 "uuid": "e1ea86be-ba10-577f-baa9-7bb84dbcd030", 00:10:24.185 "is_configured": true, 00:10:24.185 "data_offset": 2048, 00:10:24.185 "data_size": 63488 00:10:24.185 } 00:10:24.185 ] 00:10:24.185 }' 00:10:24.185 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.185 15:19:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.444 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.444 15:19:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.703 [2024-11-10 15:19:30.871204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:10:25.641 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.642 "name": "raid_bdev1", 00:10:25.642 "uuid": "1625a728-ad31-4b60-8281-1802c4022639", 00:10:25.642 "strip_size_kb": 0, 00:10:25.642 "state": "online", 00:10:25.642 "raid_level": "raid1", 00:10:25.642 "superblock": true, 00:10:25.642 "num_base_bdevs": 3, 00:10:25.642 
"num_base_bdevs_discovered": 3, 00:10:25.642 "num_base_bdevs_operational": 3, 00:10:25.642 "base_bdevs_list": [ 00:10:25.642 { 00:10:25.642 "name": "BaseBdev1", 00:10:25.642 "uuid": "eb549cd4-3fa5-5f2b-9668-a41502a9702f", 00:10:25.642 "is_configured": true, 00:10:25.642 "data_offset": 2048, 00:10:25.642 "data_size": 63488 00:10:25.642 }, 00:10:25.642 { 00:10:25.642 "name": "BaseBdev2", 00:10:25.642 "uuid": "0b8ca1ab-28f7-5bdd-a447-6365b5d116e6", 00:10:25.642 "is_configured": true, 00:10:25.642 "data_offset": 2048, 00:10:25.642 "data_size": 63488 00:10:25.642 }, 00:10:25.642 { 00:10:25.642 "name": "BaseBdev3", 00:10:25.642 "uuid": "e1ea86be-ba10-577f-baa9-7bb84dbcd030", 00:10:25.642 "is_configured": true, 00:10:25.642 "data_offset": 2048, 00:10:25.642 "data_size": 63488 00:10:25.642 } 00:10:25.642 ] 00:10:25.642 }' 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.642 15:19:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.901 [2024-11-10 15:19:32.245601] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.901 [2024-11-10 15:19:32.245690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.901 [2024-11-10 15:19:32.248133] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.901 [2024-11-10 15:19:32.248225] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.901 [2024-11-10 15:19:32.248345] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.901 
[2024-11-10 15:19:32.248387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81480 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 81480 ']' 00:10:25.901 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 81480 00:10:25.901 { 00:10:25.901 "results": [ 00:10:25.901 { 00:10:25.901 "job": "raid_bdev1", 00:10:25.901 "core_mask": "0x1", 00:10:25.901 "workload": "randrw", 00:10:25.901 "percentage": 50, 00:10:25.901 "status": "finished", 00:10:25.901 "queue_depth": 1, 00:10:25.901 "io_size": 131072, 00:10:25.901 "runtime": 1.372332, 00:10:25.902 "iops": 14539.484614510191, 00:10:25.902 "mibps": 1817.4355768137739, 00:10:25.902 "io_failed": 0, 00:10:25.902 "io_timeout": 0, 00:10:25.902 "avg_latency_us": 66.25151394629468, 00:10:25.902 "min_latency_us": 22.313257212586073, 00:10:25.902 "max_latency_us": 1349.5057962172057 00:10:25.902 } 00:10:25.902 ], 00:10:25.902 "core_count": 1 00:10:25.902 } 00:10:25.902 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:25.902 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:25.902 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81480 00:10:26.161 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:26.161 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:26.161 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81480' 00:10:26.161 killing process with pid 81480 00:10:26.161 
15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 81480 00:10:26.161 [2024-11-10 15:19:32.292274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.161 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 81480 00:10:26.161 [2024-11-10 15:19:32.318620] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ZPiZ1TamAJ 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:26.421 ************************************ 00:10:26.421 END TEST raid_read_error_test 00:10:26.421 ************************************ 00:10:26.421 00:10:26.421 real 0m3.265s 00:10:26.421 user 0m4.151s 00:10:26.421 sys 0m0.534s 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:26.421 15:19:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.421 15:19:32 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:26.421 15:19:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:26.421 15:19:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:26.421 15:19:32 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:26.422 ************************************ 00:10:26.422 START TEST raid_write_error_test 00:10:26.422 ************************************ 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 3 write 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:26.422 
15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0PmOZR34yr 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81609 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81609 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 81609 ']' 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:26.422 15:19:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.422 [2024-11-10 15:19:32.710480] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:26.422 [2024-11-10 15:19:32.710599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81609 ] 00:10:26.681 [2024-11-10 15:19:32.843120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:26.681 [2024-11-10 15:19:32.881956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.681 [2024-11-10 15:19:32.907726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.681 [2024-11-10 15:19:32.949984] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.681 [2024-11-10 15:19:32.950028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 BaseBdev1_malloc 00:10:27.255 15:19:33 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 true 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 [2024-11-10 15:19:33.577340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:27.255 [2024-11-10 15:19:33.577397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.255 [2024-11-10 15:19:33.577416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:27.255 [2024-11-10 15:19:33.577436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.255 [2024-11-10 15:19:33.579612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.255 [2024-11-10 15:19:33.579651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:27.255 BaseBdev1 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev2_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 BaseBdev2_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 true 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.255 [2024-11-10 15:19:33.605939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:27.255 [2024-11-10 15:19:33.606043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.255 [2024-11-10 15:19:33.606065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:27.255 [2024-11-10 15:19:33.606075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.255 [2024-11-10 15:19:33.608352] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.255 [2024-11-10 15:19:33.608391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:27.255 BaseBdev2 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.255 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.522 BaseBdev3_malloc 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.522 true 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.522 [2024-11-10 15:19:33.642447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:27.522 [2024-11-10 15:19:33.642499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:27.522 [2024-11-10 15:19:33.642532] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:27.522 [2024-11-10 15:19:33.642543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:27.522 [2024-11-10 15:19:33.644758] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:27.522 [2024-11-10 15:19:33.644800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:27.522 BaseBdev3 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.522 [2024-11-10 15:19:33.654497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.522 [2024-11-10 15:19:33.656400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.522 [2024-11-10 15:19:33.656534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:27.522 [2024-11-10 15:19:33.656723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:27.522 [2024-11-10 15:19:33.656736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:27.522 [2024-11-10 15:19:33.656982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006970 00:10:27.522 [2024-11-10 15:19:33.657152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:27.522 [2024-11-10 15:19:33.657165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:27.522 [2024-11-10 15:19:33.657286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.522 15:19:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.522 "name": "raid_bdev1", 00:10:27.522 "uuid": "ab441287-669e-4fcf-9fac-dfadc28c96b7", 00:10:27.522 "strip_size_kb": 0, 00:10:27.522 "state": "online", 00:10:27.522 "raid_level": "raid1", 00:10:27.522 "superblock": true, 00:10:27.522 
"num_base_bdevs": 3, 00:10:27.522 "num_base_bdevs_discovered": 3, 00:10:27.522 "num_base_bdevs_operational": 3, 00:10:27.522 "base_bdevs_list": [ 00:10:27.522 { 00:10:27.522 "name": "BaseBdev1", 00:10:27.522 "uuid": "e3be3faa-ecbf-5bb3-b414-4d92ce8fcb14", 00:10:27.522 "is_configured": true, 00:10:27.522 "data_offset": 2048, 00:10:27.522 "data_size": 63488 00:10:27.522 }, 00:10:27.522 { 00:10:27.522 "name": "BaseBdev2", 00:10:27.522 "uuid": "27925316-3e6f-5697-977e-b86e0ba4f43f", 00:10:27.522 "is_configured": true, 00:10:27.522 "data_offset": 2048, 00:10:27.522 "data_size": 63488 00:10:27.522 }, 00:10:27.522 { 00:10:27.522 "name": "BaseBdev3", 00:10:27.522 "uuid": "cd8db421-7a3c-54dd-bfe9-d485d2ab0577", 00:10:27.522 "is_configured": true, 00:10:27.522 "data_offset": 2048, 00:10:27.522 "data_size": 63488 00:10:27.522 } 00:10:27.522 ] 00:10:27.522 }' 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.522 15:19:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.782 15:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:27.782 15:19:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:28.041 [2024-11-10 15:19:34.199055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006b10 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.979 [2024-11-10 15:19:35.116029] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:28.979 [2024-11-10 15:19:35.116163] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:28.979 [2024-11-10 15:19:35.116403] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006b10 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.979 15:19:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:28.979 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.980 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.980 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.980 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.980 "name": "raid_bdev1", 00:10:28.980 "uuid": "ab441287-669e-4fcf-9fac-dfadc28c96b7", 00:10:28.980 "strip_size_kb": 0, 00:10:28.980 "state": "online", 00:10:28.980 "raid_level": "raid1", 00:10:28.980 "superblock": true, 00:10:28.980 "num_base_bdevs": 3, 00:10:28.980 "num_base_bdevs_discovered": 2, 00:10:28.980 "num_base_bdevs_operational": 2, 00:10:28.980 "base_bdevs_list": [ 00:10:28.980 { 00:10:28.980 "name": null, 00:10:28.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.980 "is_configured": false, 00:10:28.980 "data_offset": 0, 00:10:28.980 "data_size": 63488 00:10:28.980 }, 00:10:28.980 { 00:10:28.980 "name": "BaseBdev2", 00:10:28.980 "uuid": "27925316-3e6f-5697-977e-b86e0ba4f43f", 00:10:28.980 "is_configured": true, 00:10:28.980 "data_offset": 2048, 00:10:28.980 "data_size": 63488 00:10:28.980 }, 00:10:28.980 { 00:10:28.980 "name": "BaseBdev3", 00:10:28.980 "uuid": "cd8db421-7a3c-54dd-bfe9-d485d2ab0577", 00:10:28.980 "is_configured": true, 00:10:28.980 "data_offset": 2048, 00:10:28.980 "data_size": 63488 00:10:28.980 } 00:10:28.980 ] 00:10:28.980 }' 00:10:28.980 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.980 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.239 [2024-11-10 15:19:35.583320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:29.239 [2024-11-10 15:19:35.583356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.239 [2024-11-10 15:19:35.585946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.239 [2024-11-10 15:19:35.585997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.239 [2024-11-10 15:19:35.586082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.239 [2024-11-10 15:19:35.586095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:29.239 { 00:10:29.239 "results": [ 00:10:29.239 { 00:10:29.239 "job": "raid_bdev1", 00:10:29.239 "core_mask": "0x1", 00:10:29.239 "workload": "randrw", 00:10:29.239 "percentage": 50, 00:10:29.239 "status": "finished", 00:10:29.239 "queue_depth": 1, 00:10:29.239 "io_size": 131072, 00:10:29.239 "runtime": 1.382057, 00:10:29.239 "iops": 16009.469942267215, 00:10:29.239 "mibps": 2001.1837427834018, 00:10:29.239 "io_failed": 0, 00:10:29.239 "io_timeout": 0, 00:10:29.239 "avg_latency_us": 59.86779976858762, 00:10:29.239 "min_latency_us": 22.09012464046021, 00:10:29.239 "max_latency_us": 1456.6094308376187 00:10:29.239 } 00:10:29.239 ], 00:10:29.239 "core_count": 1 00:10:29.239 } 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81609 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 81609 ']' 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@956 -- # kill -0 81609 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:29.239 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81609 00:10:29.499 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:29.499 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:29.499 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81609' 00:10:29.499 killing process with pid 81609 00:10:29.499 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 81609 00:10:29.499 [2024-11-10 15:19:35.631837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.499 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 81609 00:10:29.499 [2024-11-10 15:19:35.658455] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0PmOZR34yr 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:29.758 ************************************ 00:10:29.758 END TEST raid_write_error_test 00:10:29.758 ************************************ 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.758 15:19:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:29.758 00:10:29.758 real 0m3.267s 00:10:29.758 user 0m4.156s 00:10:29.758 sys 0m0.537s 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:29.758 15:19:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.758 15:19:35 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:29.758 15:19:35 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:29.758 15:19:35 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:29.758 15:19:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:29.758 15:19:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:29.758 15:19:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:29.758 ************************************ 00:10:29.758 START TEST raid_state_function_test 00:10:29.758 ************************************ 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 false 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.758 15:19:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:29.758 Process raid pid: 81736 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81736 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81736' 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81736 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 81736 ']' 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:29.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:29.758 15:19:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.758 [2024-11-10 15:19:36.046287] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:10:29.758 [2024-11-10 15:19:36.046515] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:30.017 [2024-11-10 15:19:36.178632] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:30.017 [2024-11-10 15:19:36.208251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:30.017 [2024-11-10 15:19:36.233718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:30.017 [2024-11-10 15:19:36.276612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.017 [2024-11-10 15:19:36.276698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.587 [2024-11-10 15:19:36.871744] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.587 [2024-11-10 15:19:36.871803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.587 [2024-11-10 15:19:36.871815] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:30.587 [2024-11-10 15:19:36.871823] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:30.587 [2024-11-10 15:19:36.871833] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:30.587 [2024-11-10 15:19:36.871839] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:30.587 [2024-11-10 15:19:36.871847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:30.587 [2024-11-10 15:19:36.871854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.587 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.588 
15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.588 "name": "Existed_Raid", 00:10:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.588 "strip_size_kb": 64, 00:10:30.588 "state": "configuring", 00:10:30.588 "raid_level": "raid0", 00:10:30.588 "superblock": false, 00:10:30.588 "num_base_bdevs": 4, 00:10:30.588 "num_base_bdevs_discovered": 0, 00:10:30.588 "num_base_bdevs_operational": 4, 00:10:30.588 "base_bdevs_list": [ 00:10:30.588 { 00:10:30.588 "name": "BaseBdev1", 00:10:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.588 "is_configured": false, 00:10:30.588 "data_offset": 0, 00:10:30.588 "data_size": 0 00:10:30.588 }, 00:10:30.588 { 00:10:30.588 "name": "BaseBdev2", 00:10:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.588 "is_configured": false, 00:10:30.588 "data_offset": 0, 00:10:30.588 "data_size": 0 00:10:30.588 }, 00:10:30.588 { 00:10:30.588 "name": "BaseBdev3", 00:10:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.588 "is_configured": false, 00:10:30.588 "data_offset": 0, 00:10:30.588 "data_size": 0 00:10:30.588 }, 00:10:30.588 { 00:10:30.588 "name": "BaseBdev4", 00:10:30.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.588 "is_configured": false, 00:10:30.588 "data_offset": 0, 00:10:30.588 "data_size": 0 00:10:30.588 } 00:10:30.588 ] 00:10:30.588 }' 00:10:30.588 15:19:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.588 15:19:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.155 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.155 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.155 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.155 [2024-11-10 15:19:37.295778] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.155 [2024-11-10 15:19:37.295895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:31.155 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.156 [2024-11-10 15:19:37.307808] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.156 [2024-11-10 15:19:37.307892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.156 [2024-11-10 15:19:37.307923] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.156 [2024-11-10 15:19:37.307945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.156 [2024-11-10 15:19:37.307965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.156 [2024-11-10 15:19:37.307985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 
00:10:31.156 [2024-11-10 15:19:37.308005] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.156 [2024-11-10 15:19:37.308040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.156 [2024-11-10 15:19:37.328737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.156 BaseBdev1 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.156 15:19:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.156 [ 00:10:31.156 { 00:10:31.156 "name": "BaseBdev1", 00:10:31.156 "aliases": [ 00:10:31.156 "1a020fdb-102a-4f66-b054-6cd8b880601d" 00:10:31.156 ], 00:10:31.156 "product_name": "Malloc disk", 00:10:31.156 "block_size": 512, 00:10:31.156 "num_blocks": 65536, 00:10:31.156 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:31.156 "assigned_rate_limits": { 00:10:31.156 "rw_ios_per_sec": 0, 00:10:31.156 "rw_mbytes_per_sec": 0, 00:10:31.156 "r_mbytes_per_sec": 0, 00:10:31.156 "w_mbytes_per_sec": 0 00:10:31.156 }, 00:10:31.156 "claimed": true, 00:10:31.156 "claim_type": "exclusive_write", 00:10:31.156 "zoned": false, 00:10:31.156 "supported_io_types": { 00:10:31.156 "read": true, 00:10:31.156 "write": true, 00:10:31.156 "unmap": true, 00:10:31.156 "flush": true, 00:10:31.156 "reset": true, 00:10:31.156 "nvme_admin": false, 00:10:31.156 "nvme_io": false, 00:10:31.156 "nvme_io_md": false, 00:10:31.156 "write_zeroes": true, 00:10:31.156 "zcopy": true, 00:10:31.156 "get_zone_info": false, 00:10:31.156 "zone_management": false, 00:10:31.156 "zone_append": false, 00:10:31.156 "compare": false, 00:10:31.156 "compare_and_write": false, 00:10:31.156 "abort": true, 00:10:31.156 "seek_hole": false, 00:10:31.156 "seek_data": false, 00:10:31.156 "copy": true, 00:10:31.156 "nvme_iov_md": false 00:10:31.156 }, 00:10:31.156 "memory_domains": [ 00:10:31.156 { 00:10:31.156 "dma_device_id": "system", 00:10:31.156 "dma_device_type": 1 00:10:31.156 }, 00:10:31.156 { 00:10:31.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.156 "dma_device_type": 
2 00:10:31.156 } 00:10:31.156 ], 00:10:31.156 "driver_specific": {} 00:10:31.156 } 00:10:31.156 ] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.156 "name": "Existed_Raid", 00:10:31.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.156 "strip_size_kb": 64, 00:10:31.156 "state": "configuring", 00:10:31.156 "raid_level": "raid0", 00:10:31.156 "superblock": false, 00:10:31.156 "num_base_bdevs": 4, 00:10:31.156 "num_base_bdevs_discovered": 1, 00:10:31.156 "num_base_bdevs_operational": 4, 00:10:31.156 "base_bdevs_list": [ 00:10:31.156 { 00:10:31.156 "name": "BaseBdev1", 00:10:31.156 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:31.156 "is_configured": true, 00:10:31.156 "data_offset": 0, 00:10:31.156 "data_size": 65536 00:10:31.156 }, 00:10:31.156 { 00:10:31.156 "name": "BaseBdev2", 00:10:31.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.156 "is_configured": false, 00:10:31.156 "data_offset": 0, 00:10:31.156 "data_size": 0 00:10:31.156 }, 00:10:31.156 { 00:10:31.156 "name": "BaseBdev3", 00:10:31.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.156 "is_configured": false, 00:10:31.156 "data_offset": 0, 00:10:31.156 "data_size": 0 00:10:31.156 }, 00:10:31.156 { 00:10:31.156 "name": "BaseBdev4", 00:10:31.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.156 "is_configured": false, 00:10:31.156 "data_offset": 0, 00:10:31.156 "data_size": 0 00:10:31.156 } 00:10:31.156 ] 00:10:31.156 }' 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.156 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.724 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:31.724 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.724 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:10:31.724 [2024-11-10 15:19:37.808928] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:31.724 [2024-11-10 15:19:37.809003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:31.724 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.724 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.725 [2024-11-10 15:19:37.816962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:31.725 [2024-11-10 15:19:37.818870] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.725 [2024-11-10 15:19:37.818913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.725 [2024-11-10 15:19:37.818924] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:31.725 [2024-11-10 15:19:37.818932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.725 [2024-11-10 15:19:37.818939] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:31.725 [2024-11-10 15:19:37.818946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.725 "name": "Existed_Raid", 00:10:31.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.725 
"strip_size_kb": 64, 00:10:31.725 "state": "configuring", 00:10:31.725 "raid_level": "raid0", 00:10:31.725 "superblock": false, 00:10:31.725 "num_base_bdevs": 4, 00:10:31.725 "num_base_bdevs_discovered": 1, 00:10:31.725 "num_base_bdevs_operational": 4, 00:10:31.725 "base_bdevs_list": [ 00:10:31.725 { 00:10:31.725 "name": "BaseBdev1", 00:10:31.725 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:31.725 "is_configured": true, 00:10:31.725 "data_offset": 0, 00:10:31.725 "data_size": 65536 00:10:31.725 }, 00:10:31.725 { 00:10:31.725 "name": "BaseBdev2", 00:10:31.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.725 "is_configured": false, 00:10:31.725 "data_offset": 0, 00:10:31.725 "data_size": 0 00:10:31.725 }, 00:10:31.725 { 00:10:31.725 "name": "BaseBdev3", 00:10:31.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.725 "is_configured": false, 00:10:31.725 "data_offset": 0, 00:10:31.725 "data_size": 0 00:10:31.725 }, 00:10:31.725 { 00:10:31.725 "name": "BaseBdev4", 00:10:31.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.725 "is_configured": false, 00:10:31.725 "data_offset": 0, 00:10:31.725 "data_size": 0 00:10:31.725 } 00:10:31.725 ] 00:10:31.725 }' 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.725 15:19:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 [2024-11-10 15:19:38.219984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.984 BaseBdev2 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.984 [ 00:10:31.984 { 00:10:31.984 "name": "BaseBdev2", 00:10:31.984 "aliases": [ 00:10:31.984 "882bc79c-ff92-4729-be9c-24365132af3b" 00:10:31.984 ], 00:10:31.984 "product_name": "Malloc disk", 00:10:31.984 "block_size": 512, 00:10:31.984 "num_blocks": 65536, 00:10:31.984 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:31.984 "assigned_rate_limits": { 00:10:31.984 "rw_ios_per_sec": 0, 00:10:31.984 "rw_mbytes_per_sec": 0, 00:10:31.984 "r_mbytes_per_sec": 0, 00:10:31.984 "w_mbytes_per_sec": 0 00:10:31.984 
}, 00:10:31.984 "claimed": true, 00:10:31.984 "claim_type": "exclusive_write", 00:10:31.984 "zoned": false, 00:10:31.984 "supported_io_types": { 00:10:31.984 "read": true, 00:10:31.984 "write": true, 00:10:31.984 "unmap": true, 00:10:31.984 "flush": true, 00:10:31.984 "reset": true, 00:10:31.984 "nvme_admin": false, 00:10:31.984 "nvme_io": false, 00:10:31.984 "nvme_io_md": false, 00:10:31.984 "write_zeroes": true, 00:10:31.984 "zcopy": true, 00:10:31.984 "get_zone_info": false, 00:10:31.984 "zone_management": false, 00:10:31.984 "zone_append": false, 00:10:31.984 "compare": false, 00:10:31.984 "compare_and_write": false, 00:10:31.984 "abort": true, 00:10:31.984 "seek_hole": false, 00:10:31.984 "seek_data": false, 00:10:31.984 "copy": true, 00:10:31.984 "nvme_iov_md": false 00:10:31.984 }, 00:10:31.984 "memory_domains": [ 00:10:31.984 { 00:10:31.984 "dma_device_id": "system", 00:10:31.984 "dma_device_type": 1 00:10:31.984 }, 00:10:31.984 { 00:10:31.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.984 "dma_device_type": 2 00:10:31.984 } 00:10:31.984 ], 00:10:31.984 "driver_specific": {} 00:10:31.984 } 00:10:31.984 ] 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.984 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.985 15:19:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.985 "name": "Existed_Raid", 00:10:31.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.985 "strip_size_kb": 64, 00:10:31.985 "state": "configuring", 00:10:31.985 "raid_level": "raid0", 00:10:31.985 "superblock": false, 00:10:31.985 "num_base_bdevs": 4, 00:10:31.985 "num_base_bdevs_discovered": 2, 00:10:31.985 "num_base_bdevs_operational": 4, 00:10:31.985 "base_bdevs_list": [ 00:10:31.985 { 00:10:31.985 "name": "BaseBdev1", 00:10:31.985 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:31.985 "is_configured": true, 00:10:31.985 "data_offset": 0, 
00:10:31.985 "data_size": 65536 00:10:31.985 }, 00:10:31.985 { 00:10:31.985 "name": "BaseBdev2", 00:10:31.985 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:31.985 "is_configured": true, 00:10:31.985 "data_offset": 0, 00:10:31.985 "data_size": 65536 00:10:31.985 }, 00:10:31.985 { 00:10:31.985 "name": "BaseBdev3", 00:10:31.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.985 "is_configured": false, 00:10:31.985 "data_offset": 0, 00:10:31.985 "data_size": 0 00:10:31.985 }, 00:10:31.985 { 00:10:31.985 "name": "BaseBdev4", 00:10:31.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.985 "is_configured": false, 00:10:31.985 "data_offset": 0, 00:10:31.985 "data_size": 0 00:10:31.985 } 00:10:31.985 ] 00:10:31.985 }' 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.985 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.552 [2024-11-10 15:19:38.704482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.552 BaseBdev3 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 
00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.552 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.552 [ 00:10:32.552 { 00:10:32.552 "name": "BaseBdev3", 00:10:32.552 "aliases": [ 00:10:32.552 "9e47d7d2-0f76-49c7-83fa-8ea089c54482" 00:10:32.552 ], 00:10:32.552 "product_name": "Malloc disk", 00:10:32.552 "block_size": 512, 00:10:32.552 "num_blocks": 65536, 00:10:32.552 "uuid": "9e47d7d2-0f76-49c7-83fa-8ea089c54482", 00:10:32.552 "assigned_rate_limits": { 00:10:32.552 "rw_ios_per_sec": 0, 00:10:32.552 "rw_mbytes_per_sec": 0, 00:10:32.552 "r_mbytes_per_sec": 0, 00:10:32.552 "w_mbytes_per_sec": 0 00:10:32.552 }, 00:10:32.552 "claimed": true, 00:10:32.552 "claim_type": "exclusive_write", 00:10:32.552 "zoned": false, 00:10:32.552 "supported_io_types": { 00:10:32.552 "read": true, 00:10:32.552 "write": true, 00:10:32.552 "unmap": true, 00:10:32.553 "flush": true, 00:10:32.553 "reset": true, 00:10:32.553 "nvme_admin": false, 00:10:32.553 "nvme_io": false, 00:10:32.553 "nvme_io_md": false, 00:10:32.553 "write_zeroes": true, 00:10:32.553 "zcopy": true, 00:10:32.553 
"get_zone_info": false, 00:10:32.553 "zone_management": false, 00:10:32.553 "zone_append": false, 00:10:32.553 "compare": false, 00:10:32.553 "compare_and_write": false, 00:10:32.553 "abort": true, 00:10:32.553 "seek_hole": false, 00:10:32.553 "seek_data": false, 00:10:32.553 "copy": true, 00:10:32.553 "nvme_iov_md": false 00:10:32.553 }, 00:10:32.553 "memory_domains": [ 00:10:32.553 { 00:10:32.553 "dma_device_id": "system", 00:10:32.553 "dma_device_type": 1 00:10:32.553 }, 00:10:32.553 { 00:10:32.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.553 "dma_device_type": 2 00:10:32.553 } 00:10:32.553 ], 00:10:32.553 "driver_specific": {} 00:10:32.553 } 00:10:32.553 ] 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.553 "name": "Existed_Raid", 00:10:32.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.553 "strip_size_kb": 64, 00:10:32.553 "state": "configuring", 00:10:32.553 "raid_level": "raid0", 00:10:32.553 "superblock": false, 00:10:32.553 "num_base_bdevs": 4, 00:10:32.553 "num_base_bdevs_discovered": 3, 00:10:32.553 "num_base_bdevs_operational": 4, 00:10:32.553 "base_bdevs_list": [ 00:10:32.553 { 00:10:32.553 "name": "BaseBdev1", 00:10:32.553 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:32.553 "is_configured": true, 00:10:32.553 "data_offset": 0, 00:10:32.553 "data_size": 65536 00:10:32.553 }, 00:10:32.553 { 00:10:32.553 "name": "BaseBdev2", 00:10:32.553 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:32.553 "is_configured": true, 00:10:32.553 "data_offset": 0, 00:10:32.553 "data_size": 65536 00:10:32.553 }, 00:10:32.553 { 00:10:32.553 "name": "BaseBdev3", 00:10:32.553 "uuid": "9e47d7d2-0f76-49c7-83fa-8ea089c54482", 00:10:32.553 "is_configured": true, 00:10:32.553 "data_offset": 0, 00:10:32.553 "data_size": 65536 
00:10:32.553 }, 00:10:32.553 { 00:10:32.553 "name": "BaseBdev4", 00:10:32.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.553 "is_configured": false, 00:10:32.553 "data_offset": 0, 00:10:32.553 "data_size": 0 00:10:32.553 } 00:10:32.553 ] 00:10:32.553 }' 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.553 15:19:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.122 [2024-11-10 15:19:39.215871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:33.122 [2024-11-10 15:19:39.215923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:33.122 [2024-11-10 15:19:39.215937] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:33.122 [2024-11-10 15:19:39.216212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:33.122 [2024-11-10 15:19:39.216355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:33.122 [2024-11-10 15:19:39.216369] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:33.122 [2024-11-10 15:19:39.216575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.122 BaseBdev4 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:33.122 15:19:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.122 [ 00:10:33.122 { 00:10:33.122 "name": "BaseBdev4", 00:10:33.122 "aliases": [ 00:10:33.122 "2d96e7b2-d890-45ff-9123-934b17413d8c" 00:10:33.122 ], 00:10:33.122 "product_name": "Malloc disk", 00:10:33.122 "block_size": 512, 00:10:33.122 "num_blocks": 65536, 00:10:33.122 "uuid": "2d96e7b2-d890-45ff-9123-934b17413d8c", 00:10:33.122 "assigned_rate_limits": { 00:10:33.122 "rw_ios_per_sec": 0, 00:10:33.122 "rw_mbytes_per_sec": 0, 00:10:33.122 "r_mbytes_per_sec": 0, 00:10:33.122 "w_mbytes_per_sec": 0 00:10:33.122 }, 00:10:33.122 "claimed": true, 00:10:33.122 "claim_type": "exclusive_write", 00:10:33.122 "zoned": false, 00:10:33.122 "supported_io_types": { 
00:10:33.122 "read": true, 00:10:33.122 "write": true, 00:10:33.122 "unmap": true, 00:10:33.122 "flush": true, 00:10:33.122 "reset": true, 00:10:33.122 "nvme_admin": false, 00:10:33.122 "nvme_io": false, 00:10:33.122 "nvme_io_md": false, 00:10:33.122 "write_zeroes": true, 00:10:33.122 "zcopy": true, 00:10:33.122 "get_zone_info": false, 00:10:33.122 "zone_management": false, 00:10:33.122 "zone_append": false, 00:10:33.122 "compare": false, 00:10:33.122 "compare_and_write": false, 00:10:33.122 "abort": true, 00:10:33.122 "seek_hole": false, 00:10:33.122 "seek_data": false, 00:10:33.122 "copy": true, 00:10:33.122 "nvme_iov_md": false 00:10:33.122 }, 00:10:33.122 "memory_domains": [ 00:10:33.122 { 00:10:33.122 "dma_device_id": "system", 00:10:33.122 "dma_device_type": 1 00:10:33.122 }, 00:10:33.122 { 00:10:33.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.122 "dma_device_type": 2 00:10:33.122 } 00:10:33.122 ], 00:10:33.122 "driver_specific": {} 00:10:33.122 } 00:10:33.122 ] 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.122 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.123 "name": "Existed_Raid", 00:10:33.123 "uuid": "92126dee-085b-4401-8cb1-5d39f061706c", 00:10:33.123 "strip_size_kb": 64, 00:10:33.123 "state": "online", 00:10:33.123 "raid_level": "raid0", 00:10:33.123 "superblock": false, 00:10:33.123 "num_base_bdevs": 4, 00:10:33.123 "num_base_bdevs_discovered": 4, 00:10:33.123 "num_base_bdevs_operational": 4, 00:10:33.123 "base_bdevs_list": [ 00:10:33.123 { 00:10:33.123 "name": "BaseBdev1", 00:10:33.123 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:33.123 "is_configured": true, 00:10:33.123 "data_offset": 0, 00:10:33.123 "data_size": 65536 00:10:33.123 }, 00:10:33.123 { 00:10:33.123 "name": "BaseBdev2", 00:10:33.123 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:33.123 
"is_configured": true, 00:10:33.123 "data_offset": 0, 00:10:33.123 "data_size": 65536 00:10:33.123 }, 00:10:33.123 { 00:10:33.123 "name": "BaseBdev3", 00:10:33.123 "uuid": "9e47d7d2-0f76-49c7-83fa-8ea089c54482", 00:10:33.123 "is_configured": true, 00:10:33.123 "data_offset": 0, 00:10:33.123 "data_size": 65536 00:10:33.123 }, 00:10:33.123 { 00:10:33.123 "name": "BaseBdev4", 00:10:33.123 "uuid": "2d96e7b2-d890-45ff-9123-934b17413d8c", 00:10:33.123 "is_configured": true, 00:10:33.123 "data_offset": 0, 00:10:33.123 "data_size": 65536 00:10:33.123 } 00:10:33.123 ] 00:10:33.123 }' 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.123 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:33.382 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:33.382 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.383 [2024-11-10 15:19:39.684382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:33.383 "name": "Existed_Raid", 00:10:33.383 "aliases": [ 00:10:33.383 "92126dee-085b-4401-8cb1-5d39f061706c" 00:10:33.383 ], 00:10:33.383 "product_name": "Raid Volume", 00:10:33.383 "block_size": 512, 00:10:33.383 "num_blocks": 262144, 00:10:33.383 "uuid": "92126dee-085b-4401-8cb1-5d39f061706c", 00:10:33.383 "assigned_rate_limits": { 00:10:33.383 "rw_ios_per_sec": 0, 00:10:33.383 "rw_mbytes_per_sec": 0, 00:10:33.383 "r_mbytes_per_sec": 0, 00:10:33.383 "w_mbytes_per_sec": 0 00:10:33.383 }, 00:10:33.383 "claimed": false, 00:10:33.383 "zoned": false, 00:10:33.383 "supported_io_types": { 00:10:33.383 "read": true, 00:10:33.383 "write": true, 00:10:33.383 "unmap": true, 00:10:33.383 "flush": true, 00:10:33.383 "reset": true, 00:10:33.383 "nvme_admin": false, 00:10:33.383 "nvme_io": false, 00:10:33.383 "nvme_io_md": false, 00:10:33.383 "write_zeroes": true, 00:10:33.383 "zcopy": false, 00:10:33.383 "get_zone_info": false, 00:10:33.383 "zone_management": false, 00:10:33.383 "zone_append": false, 00:10:33.383 "compare": false, 00:10:33.383 "compare_and_write": false, 00:10:33.383 "abort": false, 00:10:33.383 "seek_hole": false, 00:10:33.383 "seek_data": false, 00:10:33.383 "copy": false, 00:10:33.383 "nvme_iov_md": false 00:10:33.383 }, 00:10:33.383 "memory_domains": [ 00:10:33.383 { 00:10:33.383 "dma_device_id": "system", 00:10:33.383 "dma_device_type": 1 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.383 "dma_device_type": 2 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "system", 00:10:33.383 "dma_device_type": 1 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.383 "dma_device_type": 2 00:10:33.383 }, 00:10:33.383 { 
00:10:33.383 "dma_device_id": "system", 00:10:33.383 "dma_device_type": 1 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.383 "dma_device_type": 2 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "system", 00:10:33.383 "dma_device_type": 1 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.383 "dma_device_type": 2 00:10:33.383 } 00:10:33.383 ], 00:10:33.383 "driver_specific": { 00:10:33.383 "raid": { 00:10:33.383 "uuid": "92126dee-085b-4401-8cb1-5d39f061706c", 00:10:33.383 "strip_size_kb": 64, 00:10:33.383 "state": "online", 00:10:33.383 "raid_level": "raid0", 00:10:33.383 "superblock": false, 00:10:33.383 "num_base_bdevs": 4, 00:10:33.383 "num_base_bdevs_discovered": 4, 00:10:33.383 "num_base_bdevs_operational": 4, 00:10:33.383 "base_bdevs_list": [ 00:10:33.383 { 00:10:33.383 "name": "BaseBdev1", 00:10:33.383 "uuid": "1a020fdb-102a-4f66-b054-6cd8b880601d", 00:10:33.383 "is_configured": true, 00:10:33.383 "data_offset": 0, 00:10:33.383 "data_size": 65536 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "name": "BaseBdev2", 00:10:33.383 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:33.383 "is_configured": true, 00:10:33.383 "data_offset": 0, 00:10:33.383 "data_size": 65536 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "name": "BaseBdev3", 00:10:33.383 "uuid": "9e47d7d2-0f76-49c7-83fa-8ea089c54482", 00:10:33.383 "is_configured": true, 00:10:33.383 "data_offset": 0, 00:10:33.383 "data_size": 65536 00:10:33.383 }, 00:10:33.383 { 00:10:33.383 "name": "BaseBdev4", 00:10:33.383 "uuid": "2d96e7b2-d890-45ff-9123-934b17413d8c", 00:10:33.383 "is_configured": true, 00:10:33.383 "data_offset": 0, 00:10:33.383 "data_size": 65536 00:10:33.383 } 00:10:33.383 ] 00:10:33.383 } 00:10:33.383 } 00:10:33.383 }' 00:10:33.383 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:33.640 BaseBdev2 00:10:33.640 BaseBdev3 00:10:33.640 BaseBdev4' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:33.640 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:33.641 15:19:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:33.641 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.641 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.641 15:19:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.899 [2024-11-10 15:19:40.028265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.899 [2024-11-10 15:19:40.028301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.899 [2024-11-10 15:19:40.028354] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid0 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.899 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.899 "name": "Existed_Raid", 00:10:33.899 "uuid": "92126dee-085b-4401-8cb1-5d39f061706c", 00:10:33.899 "strip_size_kb": 64, 00:10:33.899 "state": "offline", 00:10:33.899 "raid_level": "raid0", 00:10:33.899 "superblock": false, 00:10:33.899 "num_base_bdevs": 4, 00:10:33.899 "num_base_bdevs_discovered": 3, 00:10:33.899 "num_base_bdevs_operational": 3, 00:10:33.899 "base_bdevs_list": [ 00:10:33.899 { 00:10:33.900 "name": null, 00:10:33.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.900 "is_configured": false, 00:10:33.900 "data_offset": 0, 00:10:33.900 "data_size": 65536 00:10:33.900 }, 00:10:33.900 { 
00:10:33.900 "name": "BaseBdev2", 00:10:33.900 "uuid": "882bc79c-ff92-4729-be9c-24365132af3b", 00:10:33.900 "is_configured": true, 00:10:33.900 "data_offset": 0, 00:10:33.900 "data_size": 65536 00:10:33.900 }, 00:10:33.900 { 00:10:33.900 "name": "BaseBdev3", 00:10:33.900 "uuid": "9e47d7d2-0f76-49c7-83fa-8ea089c54482", 00:10:33.900 "is_configured": true, 00:10:33.900 "data_offset": 0, 00:10:33.900 "data_size": 65536 00:10:33.900 }, 00:10:33.900 { 00:10:33.900 "name": "BaseBdev4", 00:10:33.900 "uuid": "2d96e7b2-d890-45ff-9123-934b17413d8c", 00:10:33.900 "is_configured": true, 00:10:33.900 "data_offset": 0, 00:10:33.900 "data_size": 65536 00:10:33.900 } 00:10:33.900 ] 00:10:33.900 }' 00:10:33.900 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.900 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete 
BaseBdev2 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 [2024-11-10 15:19:40.511862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 [2024-11-10 15:19:40.583657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 [2024-11-10 15:19:40.643333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:34.420 [2024-11-10 15:19:40.643400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 BaseBdev2 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.420 
15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.420 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.420 [ 00:10:34.420 { 00:10:34.420 "name": "BaseBdev2", 00:10:34.420 "aliases": [ 00:10:34.420 "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b" 00:10:34.420 ], 00:10:34.420 "product_name": "Malloc disk", 00:10:34.420 "block_size": 512, 00:10:34.420 "num_blocks": 65536, 00:10:34.420 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:34.420 "assigned_rate_limits": { 00:10:34.420 "rw_ios_per_sec": 0, 00:10:34.420 "rw_mbytes_per_sec": 0, 00:10:34.420 "r_mbytes_per_sec": 0, 00:10:34.420 "w_mbytes_per_sec": 0 00:10:34.420 }, 00:10:34.420 "claimed": false, 00:10:34.420 "zoned": false, 00:10:34.420 "supported_io_types": { 00:10:34.420 "read": true, 00:10:34.420 "write": true, 00:10:34.420 "unmap": true, 00:10:34.420 "flush": true, 00:10:34.420 "reset": true, 00:10:34.420 "nvme_admin": false, 00:10:34.420 "nvme_io": false, 00:10:34.420 "nvme_io_md": false, 00:10:34.420 "write_zeroes": true, 00:10:34.420 "zcopy": true, 00:10:34.420 "get_zone_info": false, 00:10:34.420 "zone_management": false, 00:10:34.420 "zone_append": false, 00:10:34.420 "compare": false, 00:10:34.420 "compare_and_write": 
false, 00:10:34.420 "abort": true, 00:10:34.420 "seek_hole": false, 00:10:34.420 "seek_data": false, 00:10:34.420 "copy": true, 00:10:34.420 "nvme_iov_md": false 00:10:34.420 }, 00:10:34.420 "memory_domains": [ 00:10:34.420 { 00:10:34.420 "dma_device_id": "system", 00:10:34.420 "dma_device_type": 1 00:10:34.420 }, 00:10:34.421 { 00:10:34.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.421 "dma_device_type": 2 00:10:34.421 } 00:10:34.421 ], 00:10:34.421 "driver_specific": {} 00:10:34.421 } 00:10:34.421 ] 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.421 BaseBdev3 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.421 15:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.421 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.681 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.681 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.681 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.681 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.681 [ 00:10:34.681 { 00:10:34.681 "name": "BaseBdev3", 00:10:34.681 "aliases": [ 00:10:34.681 "909ddf7c-cb8b-4eae-b7e1-35209ad055dc" 00:10:34.681 ], 00:10:34.681 "product_name": "Malloc disk", 00:10:34.681 "block_size": 512, 00:10:34.681 "num_blocks": 65536, 00:10:34.681 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:34.681 "assigned_rate_limits": { 00:10:34.681 "rw_ios_per_sec": 0, 00:10:34.681 "rw_mbytes_per_sec": 0, 00:10:34.681 "r_mbytes_per_sec": 0, 00:10:34.681 "w_mbytes_per_sec": 0 00:10:34.681 }, 00:10:34.681 "claimed": false, 00:10:34.681 "zoned": false, 00:10:34.681 "supported_io_types": { 00:10:34.681 "read": true, 00:10:34.681 "write": true, 00:10:34.681 "unmap": true, 00:10:34.681 "flush": true, 00:10:34.681 "reset": true, 00:10:34.681 "nvme_admin": false, 00:10:34.681 "nvme_io": false, 00:10:34.681 "nvme_io_md": false, 00:10:34.681 "write_zeroes": true, 00:10:34.681 "zcopy": true, 00:10:34.681 "get_zone_info": false, 00:10:34.681 "zone_management": false, 00:10:34.681 "zone_append": false, 00:10:34.681 "compare": false, 00:10:34.681 "compare_and_write": false, 
00:10:34.681 "abort": true, 00:10:34.681 "seek_hole": false, 00:10:34.681 "seek_data": false, 00:10:34.681 "copy": true, 00:10:34.681 "nvme_iov_md": false 00:10:34.681 }, 00:10:34.681 "memory_domains": [ 00:10:34.681 { 00:10:34.681 "dma_device_id": "system", 00:10:34.681 "dma_device_type": 1 00:10:34.681 }, 00:10:34.681 { 00:10:34.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.681 "dma_device_type": 2 00:10:34.681 } 00:10:34.681 ], 00:10:34.681 "driver_specific": {} 00:10:34.681 } 00:10:34.681 ] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.682 BaseBdev4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:34.682 15:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.682 [ 00:10:34.682 { 00:10:34.682 "name": "BaseBdev4", 00:10:34.682 "aliases": [ 00:10:34.682 "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e" 00:10:34.682 ], 00:10:34.682 "product_name": "Malloc disk", 00:10:34.682 "block_size": 512, 00:10:34.682 "num_blocks": 65536, 00:10:34.682 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:34.682 "assigned_rate_limits": { 00:10:34.682 "rw_ios_per_sec": 0, 00:10:34.682 "rw_mbytes_per_sec": 0, 00:10:34.682 "r_mbytes_per_sec": 0, 00:10:34.682 "w_mbytes_per_sec": 0 00:10:34.682 }, 00:10:34.682 "claimed": false, 00:10:34.682 "zoned": false, 00:10:34.682 "supported_io_types": { 00:10:34.682 "read": true, 00:10:34.682 "write": true, 00:10:34.682 "unmap": true, 00:10:34.682 "flush": true, 00:10:34.682 "reset": true, 00:10:34.682 "nvme_admin": false, 00:10:34.682 "nvme_io": false, 00:10:34.682 "nvme_io_md": false, 00:10:34.682 "write_zeroes": true, 00:10:34.682 "zcopy": true, 00:10:34.682 "get_zone_info": false, 00:10:34.682 "zone_management": false, 00:10:34.682 "zone_append": false, 00:10:34.682 "compare": false, 00:10:34.682 "compare_and_write": false, 
00:10:34.682 "abort": true, 00:10:34.682 "seek_hole": false, 00:10:34.682 "seek_data": false, 00:10:34.682 "copy": true, 00:10:34.682 "nvme_iov_md": false 00:10:34.682 }, 00:10:34.682 "memory_domains": [ 00:10:34.682 { 00:10:34.682 "dma_device_id": "system", 00:10:34.682 "dma_device_type": 1 00:10:34.682 }, 00:10:34.682 { 00:10:34.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.682 "dma_device_type": 2 00:10:34.682 } 00:10:34.682 ], 00:10:34.682 "driver_specific": {} 00:10:34.682 } 00:10:34.682 ] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.682 [2024-11-10 15:19:40.874926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.682 [2024-11-10 15:19:40.874976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.682 [2024-11-10 15:19:40.874998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.682 [2024-11-10 15:19:40.877192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.682 [2024-11-10 15:19:40.877248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:34.682 15:19:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.682 "name": "Existed_Raid", 00:10:34.682 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:34.682 "strip_size_kb": 64, 00:10:34.682 "state": "configuring", 00:10:34.682 "raid_level": "raid0", 00:10:34.682 "superblock": false, 00:10:34.682 "num_base_bdevs": 4, 00:10:34.682 "num_base_bdevs_discovered": 3, 00:10:34.682 "num_base_bdevs_operational": 4, 00:10:34.682 "base_bdevs_list": [ 00:10:34.682 { 00:10:34.682 "name": "BaseBdev1", 00:10:34.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.682 "is_configured": false, 00:10:34.682 "data_offset": 0, 00:10:34.682 "data_size": 0 00:10:34.682 }, 00:10:34.682 { 00:10:34.682 "name": "BaseBdev2", 00:10:34.682 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:34.682 "is_configured": true, 00:10:34.682 "data_offset": 0, 00:10:34.682 "data_size": 65536 00:10:34.682 }, 00:10:34.682 { 00:10:34.682 "name": "BaseBdev3", 00:10:34.682 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:34.682 "is_configured": true, 00:10:34.682 "data_offset": 0, 00:10:34.682 "data_size": 65536 00:10:34.682 }, 00:10:34.682 { 00:10:34.682 "name": "BaseBdev4", 00:10:34.682 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:34.682 "is_configured": true, 00:10:34.682 "data_offset": 0, 00:10:34.682 "data_size": 65536 00:10:34.682 } 00:10:34.682 ] 00:10:34.682 }' 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.682 15:19:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.992 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:34.992 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.992 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.992 [2024-11-10 15:19:41.331069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.268 "name": "Existed_Raid", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 
"strip_size_kb": 64, 00:10:35.268 "state": "configuring", 00:10:35.268 "raid_level": "raid0", 00:10:35.268 "superblock": false, 00:10:35.268 "num_base_bdevs": 4, 00:10:35.268 "num_base_bdevs_discovered": 2, 00:10:35.268 "num_base_bdevs_operational": 4, 00:10:35.268 "base_bdevs_list": [ 00:10:35.268 { 00:10:35.268 "name": "BaseBdev1", 00:10:35.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 0 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": null, 00:10:35.268 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:35.268 "is_configured": false, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev3", 00:10:35.268 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:35.268 "is_configured": true, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 }, 00:10:35.268 { 00:10:35.268 "name": "BaseBdev4", 00:10:35.268 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:35.268 "is_configured": true, 00:10:35.268 "data_offset": 0, 00:10:35.268 "data_size": 65536 00:10:35.268 } 00:10:35.268 ] 00:10:35.268 }' 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.268 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.526 
15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.526 [2024-11-10 15:19:41.850468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.526 BaseBdev1 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:35.526 15:19:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.526 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.526 [ 00:10:35.526 { 00:10:35.526 "name": "BaseBdev1", 00:10:35.526 "aliases": [ 00:10:35.527 "8ae29221-f441-4410-81db-3007eb5efe6f" 00:10:35.527 ], 00:10:35.527 "product_name": "Malloc disk", 00:10:35.527 "block_size": 512, 00:10:35.527 "num_blocks": 65536, 00:10:35.527 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:35.527 "assigned_rate_limits": { 00:10:35.527 "rw_ios_per_sec": 0, 00:10:35.527 "rw_mbytes_per_sec": 0, 00:10:35.527 "r_mbytes_per_sec": 0, 00:10:35.527 "w_mbytes_per_sec": 0 00:10:35.527 }, 00:10:35.527 "claimed": true, 00:10:35.527 "claim_type": "exclusive_write", 00:10:35.527 "zoned": false, 00:10:35.527 "supported_io_types": { 00:10:35.527 "read": true, 00:10:35.527 "write": true, 00:10:35.527 "unmap": true, 00:10:35.527 "flush": true, 00:10:35.527 "reset": true, 00:10:35.527 "nvme_admin": false, 00:10:35.527 "nvme_io": false, 00:10:35.527 "nvme_io_md": false, 00:10:35.527 "write_zeroes": true, 00:10:35.527 "zcopy": true, 00:10:35.527 "get_zone_info": false, 00:10:35.527 "zone_management": false, 00:10:35.527 "zone_append": false, 00:10:35.527 "compare": false, 00:10:35.527 "compare_and_write": false, 00:10:35.527 "abort": true, 00:10:35.527 "seek_hole": false, 00:10:35.786 "seek_data": false, 00:10:35.786 "copy": true, 00:10:35.786 "nvme_iov_md": false 00:10:35.786 }, 00:10:35.786 "memory_domains": [ 00:10:35.786 { 00:10:35.786 "dma_device_id": "system", 00:10:35.786 "dma_device_type": 1 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.786 "dma_device_type": 2 00:10:35.786 } 00:10:35.786 ], 00:10:35.786 "driver_specific": {} 00:10:35.786 } 00:10:35.786 ] 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.786 15:19:41 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.786 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.786 "name": "Existed_Raid", 00:10:35.786 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:35.786 "strip_size_kb": 64, 00:10:35.786 "state": "configuring", 00:10:35.786 "raid_level": "raid0", 00:10:35.786 "superblock": false, 00:10:35.786 "num_base_bdevs": 4, 00:10:35.786 "num_base_bdevs_discovered": 3, 00:10:35.786 "num_base_bdevs_operational": 4, 00:10:35.786 "base_bdevs_list": [ 00:10:35.786 { 00:10:35.786 "name": "BaseBdev1", 00:10:35.786 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:35.786 "is_configured": true, 00:10:35.786 "data_offset": 0, 00:10:35.786 "data_size": 65536 00:10:35.786 }, 00:10:35.786 { 00:10:35.786 "name": null, 00:10:35.786 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:35.786 "is_configured": false, 00:10:35.787 "data_offset": 0, 00:10:35.787 "data_size": 65536 00:10:35.787 }, 00:10:35.787 { 00:10:35.787 "name": "BaseBdev3", 00:10:35.787 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:35.787 "is_configured": true, 00:10:35.787 "data_offset": 0, 00:10:35.787 "data_size": 65536 00:10:35.787 }, 00:10:35.787 { 00:10:35.787 "name": "BaseBdev4", 00:10:35.787 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:35.787 "is_configured": true, 00:10:35.787 "data_offset": 0, 00:10:35.787 "data_size": 65536 00:10:35.787 } 00:10:35.787 ] 00:10:35.787 }' 00:10:35.787 15:19:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.787 15:19:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.046 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:36.046 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.046 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.046 15:19:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.306 [2024-11-10 15:19:42.422723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.306 "name": "Existed_Raid", 00:10:36.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.306 "strip_size_kb": 64, 00:10:36.306 "state": "configuring", 00:10:36.306 "raid_level": "raid0", 00:10:36.306 "superblock": false, 00:10:36.306 "num_base_bdevs": 4, 00:10:36.306 "num_base_bdevs_discovered": 2, 00:10:36.306 "num_base_bdevs_operational": 4, 00:10:36.306 "base_bdevs_list": [ 00:10:36.306 { 00:10:36.306 "name": "BaseBdev1", 00:10:36.306 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:36.306 "is_configured": true, 00:10:36.306 "data_offset": 0, 00:10:36.306 "data_size": 65536 00:10:36.306 }, 00:10:36.306 { 00:10:36.306 "name": null, 00:10:36.306 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:36.306 "is_configured": false, 00:10:36.306 "data_offset": 0, 00:10:36.306 "data_size": 65536 00:10:36.306 }, 00:10:36.306 { 00:10:36.306 "name": null, 00:10:36.306 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:36.306 "is_configured": false, 00:10:36.306 "data_offset": 0, 00:10:36.306 "data_size": 65536 00:10:36.306 }, 00:10:36.306 { 00:10:36.306 "name": "BaseBdev4", 00:10:36.306 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:36.306 "is_configured": true, 00:10:36.306 "data_offset": 0, 00:10:36.306 "data_size": 65536 00:10:36.306 } 00:10:36.306 ] 00:10:36.306 }' 00:10:36.306 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.306 15:19:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.566 [2024-11-10 15:19:42.914943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.566 15:19:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.566 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.826 "name": "Existed_Raid", 00:10:36.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.826 "strip_size_kb": 64, 00:10:36.826 "state": "configuring", 00:10:36.826 "raid_level": "raid0", 00:10:36.826 "superblock": false, 00:10:36.826 "num_base_bdevs": 4, 00:10:36.826 "num_base_bdevs_discovered": 3, 00:10:36.826 "num_base_bdevs_operational": 4, 00:10:36.826 "base_bdevs_list": [ 00:10:36.826 { 00:10:36.826 "name": "BaseBdev1", 00:10:36.826 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:36.826 "is_configured": true, 00:10:36.826 "data_offset": 0, 00:10:36.826 "data_size": 65536 00:10:36.826 }, 00:10:36.826 { 00:10:36.826 "name": null, 00:10:36.826 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:36.826 "is_configured": false, 00:10:36.826 "data_offset": 
0, 00:10:36.826 "data_size": 65536 00:10:36.826 }, 00:10:36.826 { 00:10:36.826 "name": "BaseBdev3", 00:10:36.826 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:36.826 "is_configured": true, 00:10:36.826 "data_offset": 0, 00:10:36.826 "data_size": 65536 00:10:36.826 }, 00:10:36.826 { 00:10:36.826 "name": "BaseBdev4", 00:10:36.826 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:36.826 "is_configured": true, 00:10:36.826 "data_offset": 0, 00:10:36.826 "data_size": 65536 00:10:36.826 } 00:10:36.826 ] 00:10:36.826 }' 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.826 15:19:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.086 [2024-11-10 15:19:43.407118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.086 15:19:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.086 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.344 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.344 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.344 "name": "Existed_Raid", 00:10:37.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.344 "strip_size_kb": 64, 00:10:37.344 "state": "configuring", 00:10:37.344 
"raid_level": "raid0", 00:10:37.344 "superblock": false, 00:10:37.344 "num_base_bdevs": 4, 00:10:37.344 "num_base_bdevs_discovered": 2, 00:10:37.344 "num_base_bdevs_operational": 4, 00:10:37.344 "base_bdevs_list": [ 00:10:37.344 { 00:10:37.344 "name": null, 00:10:37.344 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:37.344 "is_configured": false, 00:10:37.344 "data_offset": 0, 00:10:37.344 "data_size": 65536 00:10:37.344 }, 00:10:37.344 { 00:10:37.344 "name": null, 00:10:37.344 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:37.344 "is_configured": false, 00:10:37.344 "data_offset": 0, 00:10:37.344 "data_size": 65536 00:10:37.344 }, 00:10:37.344 { 00:10:37.344 "name": "BaseBdev3", 00:10:37.344 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:37.344 "is_configured": true, 00:10:37.344 "data_offset": 0, 00:10:37.344 "data_size": 65536 00:10:37.344 }, 00:10:37.344 { 00:10:37.344 "name": "BaseBdev4", 00:10:37.344 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:37.344 "is_configured": true, 00:10:37.344 "data_offset": 0, 00:10:37.344 "data_size": 65536 00:10:37.344 } 00:10:37.344 ] 00:10:37.344 }' 00:10:37.344 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.344 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.603 [2024-11-10 15:19:43.882220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.603 "name": "Existed_Raid", 00:10:37.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.603 "strip_size_kb": 64, 00:10:37.603 "state": "configuring", 00:10:37.603 "raid_level": "raid0", 00:10:37.603 "superblock": false, 00:10:37.603 "num_base_bdevs": 4, 00:10:37.603 "num_base_bdevs_discovered": 3, 00:10:37.603 "num_base_bdevs_operational": 4, 00:10:37.603 "base_bdevs_list": [ 00:10:37.603 { 00:10:37.603 "name": null, 00:10:37.603 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:37.603 "is_configured": false, 00:10:37.603 "data_offset": 0, 00:10:37.603 "data_size": 65536 00:10:37.603 }, 00:10:37.603 { 00:10:37.603 "name": "BaseBdev2", 00:10:37.603 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:37.603 "is_configured": true, 00:10:37.603 "data_offset": 0, 00:10:37.603 "data_size": 65536 00:10:37.603 }, 00:10:37.603 { 00:10:37.603 "name": "BaseBdev3", 00:10:37.603 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:37.603 "is_configured": true, 00:10:37.603 "data_offset": 0, 00:10:37.603 "data_size": 65536 00:10:37.603 }, 00:10:37.603 { 00:10:37.603 "name": "BaseBdev4", 00:10:37.603 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:37.603 "is_configured": true, 00:10:37.603 "data_offset": 0, 00:10:37.603 "data_size": 65536 00:10:37.603 } 00:10:37.603 ] 00:10:37.603 }' 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.603 15:19:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 15:19:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8ae29221-f441-4410-81db-3007eb5efe6f 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.170 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.170 [2024-11-10 15:19:44.425680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:38.170 [2024-11-10 15:19:44.425730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.170 [2024-11-10 15:19:44.425743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:38.170 
[2024-11-10 15:19:44.426058] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:38.170 [2024-11-10 15:19:44.426199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.170 [2024-11-10 15:19:44.426216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:38.171 NewBaseBdev 00:10:38.171 [2024-11-10 15:19:44.426420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 [ 00:10:38.171 { 00:10:38.171 "name": "NewBaseBdev", 00:10:38.171 "aliases": [ 00:10:38.171 "8ae29221-f441-4410-81db-3007eb5efe6f" 00:10:38.171 ], 00:10:38.171 "product_name": "Malloc disk", 00:10:38.171 "block_size": 512, 00:10:38.171 "num_blocks": 65536, 00:10:38.171 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:38.171 "assigned_rate_limits": { 00:10:38.171 "rw_ios_per_sec": 0, 00:10:38.171 "rw_mbytes_per_sec": 0, 00:10:38.171 "r_mbytes_per_sec": 0, 00:10:38.171 "w_mbytes_per_sec": 0 00:10:38.171 }, 00:10:38.171 "claimed": true, 00:10:38.171 "claim_type": "exclusive_write", 00:10:38.171 "zoned": false, 00:10:38.171 "supported_io_types": { 00:10:38.171 "read": true, 00:10:38.171 "write": true, 00:10:38.171 "unmap": true, 00:10:38.171 "flush": true, 00:10:38.171 "reset": true, 00:10:38.171 "nvme_admin": false, 00:10:38.171 "nvme_io": false, 00:10:38.171 "nvme_io_md": false, 00:10:38.171 "write_zeroes": true, 00:10:38.171 "zcopy": true, 00:10:38.171 "get_zone_info": false, 00:10:38.171 "zone_management": false, 00:10:38.171 "zone_append": false, 00:10:38.171 "compare": false, 00:10:38.171 "compare_and_write": false, 00:10:38.171 "abort": true, 00:10:38.171 "seek_hole": false, 00:10:38.171 "seek_data": false, 00:10:38.171 "copy": true, 00:10:38.171 "nvme_iov_md": false 00:10:38.171 }, 00:10:38.171 "memory_domains": [ 00:10:38.171 { 00:10:38.171 "dma_device_id": "system", 00:10:38.171 "dma_device_type": 1 00:10:38.171 }, 00:10:38.171 { 00:10:38.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.171 "dma_device_type": 2 00:10:38.171 } 00:10:38.171 ], 00:10:38.171 "driver_specific": {} 00:10:38.171 } 00:10:38.171 ] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 
00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.171 "name": "Existed_Raid", 00:10:38.171 "uuid": "6b788f17-4199-4284-a2d3-6e8328b4f58c", 00:10:38.171 "strip_size_kb": 64, 00:10:38.171 "state": "online", 
00:10:38.171 "raid_level": "raid0", 00:10:38.171 "superblock": false, 00:10:38.171 "num_base_bdevs": 4, 00:10:38.171 "num_base_bdevs_discovered": 4, 00:10:38.171 "num_base_bdevs_operational": 4, 00:10:38.171 "base_bdevs_list": [ 00:10:38.171 { 00:10:38.171 "name": "NewBaseBdev", 00:10:38.171 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:38.171 "is_configured": true, 00:10:38.171 "data_offset": 0, 00:10:38.171 "data_size": 65536 00:10:38.171 }, 00:10:38.171 { 00:10:38.171 "name": "BaseBdev2", 00:10:38.171 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:38.171 "is_configured": true, 00:10:38.171 "data_offset": 0, 00:10:38.171 "data_size": 65536 00:10:38.171 }, 00:10:38.171 { 00:10:38.171 "name": "BaseBdev3", 00:10:38.171 "uuid": "909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:38.171 "is_configured": true, 00:10:38.171 "data_offset": 0, 00:10:38.171 "data_size": 65536 00:10:38.171 }, 00:10:38.171 { 00:10:38.171 "name": "BaseBdev4", 00:10:38.171 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:38.171 "is_configured": true, 00:10:38.171 "data_offset": 0, 00:10:38.171 "data_size": 65536 00:10:38.171 } 00:10:38.171 ] 00:10:38.171 }' 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.171 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.739 [2024-11-10 15:19:44.950290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.739 "name": "Existed_Raid", 00:10:38.739 "aliases": [ 00:10:38.739 "6b788f17-4199-4284-a2d3-6e8328b4f58c" 00:10:38.739 ], 00:10:38.739 "product_name": "Raid Volume", 00:10:38.739 "block_size": 512, 00:10:38.739 "num_blocks": 262144, 00:10:38.739 "uuid": "6b788f17-4199-4284-a2d3-6e8328b4f58c", 00:10:38.739 "assigned_rate_limits": { 00:10:38.739 "rw_ios_per_sec": 0, 00:10:38.739 "rw_mbytes_per_sec": 0, 00:10:38.739 "r_mbytes_per_sec": 0, 00:10:38.739 "w_mbytes_per_sec": 0 00:10:38.739 }, 00:10:38.739 "claimed": false, 00:10:38.739 "zoned": false, 00:10:38.739 "supported_io_types": { 00:10:38.739 "read": true, 00:10:38.739 "write": true, 00:10:38.739 "unmap": true, 00:10:38.739 "flush": true, 00:10:38.739 "reset": true, 00:10:38.739 "nvme_admin": false, 00:10:38.739 "nvme_io": false, 00:10:38.739 "nvme_io_md": false, 00:10:38.739 "write_zeroes": true, 00:10:38.739 "zcopy": false, 00:10:38.739 "get_zone_info": false, 00:10:38.739 "zone_management": false, 00:10:38.739 "zone_append": false, 00:10:38.739 "compare": false, 00:10:38.739 "compare_and_write": false, 00:10:38.739 "abort": false, 00:10:38.739 "seek_hole": false, 00:10:38.739 "seek_data": 
false, 00:10:38.739 "copy": false, 00:10:38.739 "nvme_iov_md": false 00:10:38.739 }, 00:10:38.739 "memory_domains": [ 00:10:38.739 { 00:10:38.739 "dma_device_id": "system", 00:10:38.739 "dma_device_type": 1 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.739 "dma_device_type": 2 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "system", 00:10:38.739 "dma_device_type": 1 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.739 "dma_device_type": 2 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "system", 00:10:38.739 "dma_device_type": 1 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.739 "dma_device_type": 2 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "system", 00:10:38.739 "dma_device_type": 1 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.739 "dma_device_type": 2 00:10:38.739 } 00:10:38.739 ], 00:10:38.739 "driver_specific": { 00:10:38.739 "raid": { 00:10:38.739 "uuid": "6b788f17-4199-4284-a2d3-6e8328b4f58c", 00:10:38.739 "strip_size_kb": 64, 00:10:38.739 "state": "online", 00:10:38.739 "raid_level": "raid0", 00:10:38.739 "superblock": false, 00:10:38.739 "num_base_bdevs": 4, 00:10:38.739 "num_base_bdevs_discovered": 4, 00:10:38.739 "num_base_bdevs_operational": 4, 00:10:38.739 "base_bdevs_list": [ 00:10:38.739 { 00:10:38.739 "name": "NewBaseBdev", 00:10:38.739 "uuid": "8ae29221-f441-4410-81db-3007eb5efe6f", 00:10:38.739 "is_configured": true, 00:10:38.739 "data_offset": 0, 00:10:38.739 "data_size": 65536 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "name": "BaseBdev2", 00:10:38.739 "uuid": "e2d92126-7a6a-4ce3-a7c5-ffbc3e10885b", 00:10:38.739 "is_configured": true, 00:10:38.739 "data_offset": 0, 00:10:38.739 "data_size": 65536 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "name": "BaseBdev3", 00:10:38.739 "uuid": 
"909ddf7c-cb8b-4eae-b7e1-35209ad055dc", 00:10:38.739 "is_configured": true, 00:10:38.739 "data_offset": 0, 00:10:38.739 "data_size": 65536 00:10:38.739 }, 00:10:38.739 { 00:10:38.739 "name": "BaseBdev4", 00:10:38.739 "uuid": "9dec5fb4-1274-4e5b-9c1b-6b70705bf15e", 00:10:38.739 "is_configured": true, 00:10:38.739 "data_offset": 0, 00:10:38.739 "data_size": 65536 00:10:38.739 } 00:10:38.739 ] 00:10:38.739 } 00:10:38.739 } 00:10:38.739 }' 00:10:38.739 15:19:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.739 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:38.739 BaseBdev2 00:10:38.739 BaseBdev3 00:10:38.739 BaseBdev4' 00:10:38.739 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.740 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == 
\5\1\2\ \ \ ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.999 [2024-11-10 15:19:45.281981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.999 [2024-11-10 15:19:45.282075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:38.999 [2024-11-10 15:19:45.282168] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:38.999 [2024-11-10 15:19:45.282255] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:38.999 [2024-11-10 15:19:45.282277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.999 15:19:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81736 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 81736 ']' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 81736 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81736 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81736' 00:10:38.999 killing process with pid 81736 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 81736 00:10:38.999 [2024-11-10 15:19:45.330909] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.999 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 81736 00:10:39.258 [2024-11-10 15:19:45.373450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.258 15:19:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:39.258 00:10:39.258 real 0m9.641s 00:10:39.258 user 0m16.543s 00:10:39.258 sys 0m1.985s 00:10:39.258 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:39.258 15:19:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.258 ************************************ 00:10:39.258 END TEST raid_state_function_test 00:10:39.258 
************************************ 00:10:39.517 15:19:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:10:39.517 15:19:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:39.517 15:19:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:39.517 15:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.517 ************************************ 00:10:39.517 START TEST raid_state_function_test_sb 00:10:39.517 ************************************ 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid0 4 true 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:39.517 15:19:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82385 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82385' 00:10:39.517 Process raid pid: 82385 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82385 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 82385 ']' 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:39.517 15:19:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.517 [2024-11-10 15:19:45.760655] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:39.517 [2024-11-10 15:19:45.760852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.775 [2024-11-10 15:19:45.897743] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:39.775 [2024-11-10 15:19:45.918362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.775 [2024-11-10 15:19:45.946633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.775 [2024-11-10 15:19:45.991960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.775 [2024-11-10 15:19:45.992079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.343 [2024-11-10 15:19:46.655615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.343 [2024-11-10 15:19:46.655729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.343 [2024-11-10 15:19:46.655781] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.343 [2024-11-10 15:19:46.655808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.343 [2024-11-10 15:19:46.655836] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.343 [2024-11-10 15:19:46.655859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.343 [2024-11-10 15:19:46.655919] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:40.343 [2024-11-10 15:19:46.655950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.343 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:40.603 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.603 "name": "Existed_Raid", 00:10:40.603 "uuid": "3b730da1-0af9-401b-89e4-7d418c09b1fe", 00:10:40.603 "strip_size_kb": 64, 00:10:40.603 "state": "configuring", 00:10:40.603 "raid_level": "raid0", 00:10:40.603 "superblock": true, 00:10:40.603 "num_base_bdevs": 4, 00:10:40.603 "num_base_bdevs_discovered": 0, 00:10:40.603 "num_base_bdevs_operational": 4, 00:10:40.603 "base_bdevs_list": [ 00:10:40.603 { 00:10:40.603 "name": "BaseBdev1", 00:10:40.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.603 "is_configured": false, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 0 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev2", 00:10:40.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.603 "is_configured": false, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 0 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev3", 00:10:40.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.603 "is_configured": false, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 0 00:10:40.603 }, 00:10:40.603 { 00:10:40.603 "name": "BaseBdev4", 00:10:40.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.603 "is_configured": false, 00:10:40.603 "data_offset": 0, 00:10:40.603 "data_size": 0 00:10:40.603 } 00:10:40.603 ] 00:10:40.603 }' 00:10:40.603 15:19:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.603 15:19:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
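The argument set-up traced above (bdev_raid.sh@215-223) can be sketched as a standalone snippet. This is a simplified reconstruction, not the actual `bdev_raid.sh` code: because raid0 is a striped level, the test derives a `-z 64` strip-size argument, and `superblock=true` adds `-s`, which together produce the `bdev_raid_create -z 64 -s -r raid0 ...` invocation seen in the trace.

```shell
# Sketch of the create-argument logic exercised by the trace above.
# raid1 is the only mirrored (non-striped) level here, so every other
# level gets a strip size; superblock=true adds the -s flag.
raid_level=raid0
superblock=true
strip_size_create_arg=""
superblock_create_arg=""

if [ "$raid_level" != "raid1" ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

if [ "$superblock" = true ]; then
    superblock_create_arg="-s"
fi

# Mirrors the rpc_cmd bdev_raid_create call shown in the log.
echo "bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level -n Existed_Raid"
```

With these inputs the echoed command matches the one the test issues: `bdev_raid_create -z 64 -s -r raid0 -n Existed_Raid` (the `-b 'BaseBdev1 ... BaseBdev4'` list is appended separately in the real script).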
00:10:40.862 [2024-11-10 15:19:47.099617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.862 [2024-11-10 15:19:47.099657] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 [2024-11-10 15:19:47.111661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.862 [2024-11-10 15:19:47.111748] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.862 [2024-11-10 15:19:47.111790] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:40.862 [2024-11-10 15:19:47.111817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:40.862 [2024-11-10 15:19:47.111859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:40.862 [2024-11-10 15:19:47.111884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:40.862 [2024-11-10 15:19:47.111952] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:40.862 [2024-11-10 15:19:47.111985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.862 15:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 [2024-11-10 15:19:47.133062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.862 BaseBdev1 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 [ 00:10:40.862 { 00:10:40.862 "name": "BaseBdev1", 00:10:40.862 "aliases": [ 00:10:40.862 "6c3c3f4a-9173-4a33-a03d-ad7fa339945e" 00:10:40.862 ], 00:10:40.862 "product_name": "Malloc disk", 00:10:40.862 "block_size": 512, 00:10:40.862 "num_blocks": 65536, 00:10:40.862 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:40.862 "assigned_rate_limits": { 00:10:40.862 "rw_ios_per_sec": 0, 00:10:40.862 "rw_mbytes_per_sec": 0, 00:10:40.862 "r_mbytes_per_sec": 0, 00:10:40.862 "w_mbytes_per_sec": 0 00:10:40.862 }, 00:10:40.862 "claimed": true, 00:10:40.862 "claim_type": "exclusive_write", 00:10:40.862 "zoned": false, 00:10:40.862 "supported_io_types": { 00:10:40.862 "read": true, 00:10:40.862 "write": true, 00:10:40.862 "unmap": true, 00:10:40.862 "flush": true, 00:10:40.862 "reset": true, 00:10:40.862 "nvme_admin": false, 00:10:40.862 "nvme_io": false, 00:10:40.862 "nvme_io_md": false, 00:10:40.862 "write_zeroes": true, 00:10:40.862 "zcopy": true, 00:10:40.862 "get_zone_info": false, 00:10:40.862 "zone_management": false, 00:10:40.862 "zone_append": false, 00:10:40.862 "compare": false, 00:10:40.862 "compare_and_write": false, 00:10:40.862 "abort": true, 00:10:40.862 "seek_hole": false, 00:10:40.862 "seek_data": false, 00:10:40.862 "copy": true, 00:10:40.862 "nvme_iov_md": false 00:10:40.862 }, 00:10:40.862 "memory_domains": [ 00:10:40.862 { 00:10:40.862 "dma_device_id": "system", 00:10:40.862 "dma_device_type": 1 00:10:40.862 }, 00:10:40.862 { 00:10:40.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.862 "dma_device_type": 2 00:10:40.862 } 00:10:40.862 ], 00:10:40.862 "driver_specific": {} 00:10:40.862 } 00:10:40.862 ] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:40.862 
15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.862 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.121 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.121 "name": "Existed_Raid", 00:10:41.121 "uuid": "28a67017-b918-4d48-a6d7-d5f7f7156d78", 00:10:41.121 "strip_size_kb": 
64, 00:10:41.121 "state": "configuring", 00:10:41.121 "raid_level": "raid0", 00:10:41.121 "superblock": true, 00:10:41.121 "num_base_bdevs": 4, 00:10:41.121 "num_base_bdevs_discovered": 1, 00:10:41.121 "num_base_bdevs_operational": 4, 00:10:41.121 "base_bdevs_list": [ 00:10:41.121 { 00:10:41.121 "name": "BaseBdev1", 00:10:41.121 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:41.121 "is_configured": true, 00:10:41.121 "data_offset": 2048, 00:10:41.121 "data_size": 63488 00:10:41.121 }, 00:10:41.121 { 00:10:41.121 "name": "BaseBdev2", 00:10:41.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.121 "is_configured": false, 00:10:41.121 "data_offset": 0, 00:10:41.121 "data_size": 0 00:10:41.121 }, 00:10:41.121 { 00:10:41.121 "name": "BaseBdev3", 00:10:41.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.121 "is_configured": false, 00:10:41.121 "data_offset": 0, 00:10:41.122 "data_size": 0 00:10:41.122 }, 00:10:41.122 { 00:10:41.122 "name": "BaseBdev4", 00:10:41.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.122 "is_configured": false, 00:10:41.122 "data_offset": 0, 00:10:41.122 "data_size": 0 00:10:41.122 } 00:10:41.122 ] 00:10:41.122 }' 00:10:41.122 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.122 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.382 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:41.382 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.382 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.382 [2024-11-10 15:19:47.645303] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:41.383 [2024-11-10 15:19:47.645435] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name 
Existed_Raid, state configuring 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.383 [2024-11-10 15:19:47.657348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:41.383 [2024-11-10 15:19:47.659602] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:41.383 [2024-11-10 15:19:47.659689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:41.383 [2024-11-10 15:19:47.659737] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:41.383 [2024-11-10 15:19:47.659772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:41.383 [2024-11-10 15:19:47.659807] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:41.383 [2024-11-10 15:19:47.659842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.383 15:19:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.383 "name": "Existed_Raid", 00:10:41.383 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:41.383 "strip_size_kb": 64, 00:10:41.383 "state": "configuring", 00:10:41.383 "raid_level": "raid0", 00:10:41.383 "superblock": true, 00:10:41.383 "num_base_bdevs": 4, 00:10:41.383 
"num_base_bdevs_discovered": 1, 00:10:41.383 "num_base_bdevs_operational": 4, 00:10:41.383 "base_bdevs_list": [ 00:10:41.383 { 00:10:41.383 "name": "BaseBdev1", 00:10:41.383 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:41.383 "is_configured": true, 00:10:41.383 "data_offset": 2048, 00:10:41.383 "data_size": 63488 00:10:41.383 }, 00:10:41.383 { 00:10:41.383 "name": "BaseBdev2", 00:10:41.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.383 "is_configured": false, 00:10:41.383 "data_offset": 0, 00:10:41.383 "data_size": 0 00:10:41.383 }, 00:10:41.383 { 00:10:41.383 "name": "BaseBdev3", 00:10:41.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.383 "is_configured": false, 00:10:41.383 "data_offset": 0, 00:10:41.383 "data_size": 0 00:10:41.383 }, 00:10:41.383 { 00:10:41.383 "name": "BaseBdev4", 00:10:41.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.383 "is_configured": false, 00:10:41.383 "data_offset": 0, 00:10:41.383 "data_size": 0 00:10:41.383 } 00:10:41.383 ] 00:10:41.383 }' 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.383 15:19:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 [2024-11-10 15:19:48.148830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.952 BaseBdev2 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:41.952 15:19:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 [ 00:10:41.952 { 00:10:41.952 "name": "BaseBdev2", 00:10:41.952 "aliases": [ 00:10:41.952 "da094306-d252-4127-8d69-ee767f1a0a81" 00:10:41.952 ], 00:10:41.952 "product_name": "Malloc disk", 00:10:41.952 "block_size": 512, 00:10:41.952 "num_blocks": 65536, 00:10:41.952 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:41.952 "assigned_rate_limits": { 00:10:41.952 "rw_ios_per_sec": 0, 00:10:41.952 "rw_mbytes_per_sec": 0, 00:10:41.952 "r_mbytes_per_sec": 0, 00:10:41.952 "w_mbytes_per_sec": 0 00:10:41.952 }, 00:10:41.952 "claimed": true, 00:10:41.952 "claim_type": "exclusive_write", 00:10:41.952 "zoned": false, 
00:10:41.952 "supported_io_types": { 00:10:41.952 "read": true, 00:10:41.952 "write": true, 00:10:41.952 "unmap": true, 00:10:41.952 "flush": true, 00:10:41.952 "reset": true, 00:10:41.952 "nvme_admin": false, 00:10:41.952 "nvme_io": false, 00:10:41.952 "nvme_io_md": false, 00:10:41.952 "write_zeroes": true, 00:10:41.952 "zcopy": true, 00:10:41.952 "get_zone_info": false, 00:10:41.952 "zone_management": false, 00:10:41.952 "zone_append": false, 00:10:41.952 "compare": false, 00:10:41.952 "compare_and_write": false, 00:10:41.952 "abort": true, 00:10:41.952 "seek_hole": false, 00:10:41.952 "seek_data": false, 00:10:41.952 "copy": true, 00:10:41.952 "nvme_iov_md": false 00:10:41.952 }, 00:10:41.952 "memory_domains": [ 00:10:41.952 { 00:10:41.952 "dma_device_id": "system", 00:10:41.952 "dma_device_type": 1 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.952 "dma_device_type": 2 00:10:41.952 } 00:10:41.952 ], 00:10:41.952 "driver_specific": {} 00:10:41.952 } 00:10:41.952 ] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.952 15:19:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.952 "name": "Existed_Raid", 00:10:41.952 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:41.952 "strip_size_kb": 64, 00:10:41.952 "state": "configuring", 00:10:41.952 "raid_level": "raid0", 00:10:41.952 "superblock": true, 00:10:41.952 "num_base_bdevs": 4, 00:10:41.952 "num_base_bdevs_discovered": 2, 00:10:41.952 "num_base_bdevs_operational": 4, 00:10:41.952 "base_bdevs_list": [ 00:10:41.952 { 00:10:41.952 "name": "BaseBdev1", 00:10:41.952 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:41.952 "is_configured": true, 00:10:41.952 "data_offset": 2048, 00:10:41.952 "data_size": 63488 00:10:41.952 }, 00:10:41.952 { 
00:10:41.952 "name": "BaseBdev2", 00:10:41.952 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:41.952 "is_configured": true, 00:10:41.952 "data_offset": 2048, 00:10:41.952 "data_size": 63488 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "name": "BaseBdev3", 00:10:41.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.952 "is_configured": false, 00:10:41.952 "data_offset": 0, 00:10:41.952 "data_size": 0 00:10:41.952 }, 00:10:41.952 { 00:10:41.952 "name": "BaseBdev4", 00:10:41.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.952 "is_configured": false, 00:10:41.952 "data_offset": 0, 00:10:41.952 "data_size": 0 00:10:41.952 } 00:10:41.952 ] 00:10:41.952 }' 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.952 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.522 [2024-11-10 15:19:48.709624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.522 BaseBdev3 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:42.522 15:19:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.522 [ 00:10:42.522 { 00:10:42.522 "name": "BaseBdev3", 00:10:42.522 "aliases": [ 00:10:42.522 "95585d73-0655-4dfa-b639-18fc2252ef69" 00:10:42.522 ], 00:10:42.522 "product_name": "Malloc disk", 00:10:42.522 "block_size": 512, 00:10:42.522 "num_blocks": 65536, 00:10:42.522 "uuid": "95585d73-0655-4dfa-b639-18fc2252ef69", 00:10:42.522 "assigned_rate_limits": { 00:10:42.522 "rw_ios_per_sec": 0, 00:10:42.522 "rw_mbytes_per_sec": 0, 00:10:42.522 "r_mbytes_per_sec": 0, 00:10:42.522 "w_mbytes_per_sec": 0 00:10:42.522 }, 00:10:42.522 "claimed": true, 00:10:42.522 "claim_type": "exclusive_write", 00:10:42.522 "zoned": false, 00:10:42.522 "supported_io_types": { 00:10:42.522 "read": true, 00:10:42.522 "write": true, 00:10:42.522 "unmap": true, 00:10:42.522 "flush": true, 00:10:42.522 "reset": true, 00:10:42.522 "nvme_admin": false, 00:10:42.522 "nvme_io": false, 00:10:42.522 "nvme_io_md": false, 00:10:42.522 "write_zeroes": true, 00:10:42.522 "zcopy": true, 
00:10:42.522 "get_zone_info": false, 00:10:42.522 "zone_management": false, 00:10:42.522 "zone_append": false, 00:10:42.522 "compare": false, 00:10:42.522 "compare_and_write": false, 00:10:42.522 "abort": true, 00:10:42.522 "seek_hole": false, 00:10:42.522 "seek_data": false, 00:10:42.522 "copy": true, 00:10:42.522 "nvme_iov_md": false 00:10:42.522 }, 00:10:42.522 "memory_domains": [ 00:10:42.522 { 00:10:42.522 "dma_device_id": "system", 00:10:42.522 "dma_device_type": 1 00:10:42.522 }, 00:10:42.522 { 00:10:42.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.522 "dma_device_type": 2 00:10:42.522 } 00:10:42.522 ], 00:10:42.522 "driver_specific": {} 00:10:42.522 } 00:10:42.522 ] 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.522 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.523 
15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.523 "name": "Existed_Raid", 00:10:42.523 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:42.523 "strip_size_kb": 64, 00:10:42.523 "state": "configuring", 00:10:42.523 "raid_level": "raid0", 00:10:42.523 "superblock": true, 00:10:42.523 "num_base_bdevs": 4, 00:10:42.523 "num_base_bdevs_discovered": 3, 00:10:42.523 "num_base_bdevs_operational": 4, 00:10:42.523 "base_bdevs_list": [ 00:10:42.523 { 00:10:42.523 "name": "BaseBdev1", 00:10:42.523 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:42.523 "is_configured": true, 00:10:42.523 "data_offset": 2048, 00:10:42.523 "data_size": 63488 00:10:42.523 }, 00:10:42.523 { 00:10:42.523 "name": "BaseBdev2", 00:10:42.523 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:42.523 "is_configured": true, 00:10:42.523 "data_offset": 2048, 00:10:42.523 "data_size": 63488 00:10:42.523 }, 00:10:42.523 { 00:10:42.523 "name": "BaseBdev3", 00:10:42.523 "uuid": "95585d73-0655-4dfa-b639-18fc2252ef69", 00:10:42.523 
"is_configured": true, 00:10:42.523 "data_offset": 2048, 00:10:42.523 "data_size": 63488 00:10:42.523 }, 00:10:42.523 { 00:10:42.523 "name": "BaseBdev4", 00:10:42.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.523 "is_configured": false, 00:10:42.523 "data_offset": 0, 00:10:42.523 "data_size": 0 00:10:42.523 } 00:10:42.523 ] 00:10:42.523 }' 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.523 15:19:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.093 BaseBdev4 00:10:43.093 [2024-11-10 15:19:49.261244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:43.093 [2024-11-10 15:19:49.261469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:43.093 [2024-11-10 15:19:49.261494] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.093 [2024-11-10 15:19:49.261855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:43.093 [2024-11-10 15:19:49.262027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:43.093 [2024-11-10 15:19:49.262041] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:10:43.093 [2024-11-10 15:19:49.262174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.093 15:19:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.093 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.093 [ 00:10:43.093 { 00:10:43.093 "name": "BaseBdev4", 00:10:43.093 "aliases": [ 00:10:43.093 "939e946e-bab4-4fae-bc0e-950cad934cea" 00:10:43.093 ], 00:10:43.093 "product_name": "Malloc disk", 00:10:43.093 "block_size": 512, 00:10:43.093 "num_blocks": 65536, 00:10:43.093 "uuid": "939e946e-bab4-4fae-bc0e-950cad934cea", 00:10:43.093 "assigned_rate_limits": { 00:10:43.093 "rw_ios_per_sec": 0, 00:10:43.093 "rw_mbytes_per_sec": 0, 00:10:43.093 "r_mbytes_per_sec": 0, 00:10:43.093 "w_mbytes_per_sec": 0 
00:10:43.093 }, 00:10:43.093 "claimed": true, 00:10:43.093 "claim_type": "exclusive_write", 00:10:43.093 "zoned": false, 00:10:43.093 "supported_io_types": { 00:10:43.093 "read": true, 00:10:43.093 "write": true, 00:10:43.093 "unmap": true, 00:10:43.093 "flush": true, 00:10:43.093 "reset": true, 00:10:43.093 "nvme_admin": false, 00:10:43.093 "nvme_io": false, 00:10:43.093 "nvme_io_md": false, 00:10:43.093 "write_zeroes": true, 00:10:43.093 "zcopy": true, 00:10:43.093 "get_zone_info": false, 00:10:43.093 "zone_management": false, 00:10:43.093 "zone_append": false, 00:10:43.093 "compare": false, 00:10:43.093 "compare_and_write": false, 00:10:43.093 "abort": true, 00:10:43.093 "seek_hole": false, 00:10:43.093 "seek_data": false, 00:10:43.093 "copy": true, 00:10:43.093 "nvme_iov_md": false 00:10:43.093 }, 00:10:43.093 "memory_domains": [ 00:10:43.093 { 00:10:43.093 "dma_device_id": "system", 00:10:43.093 "dma_device_type": 1 00:10:43.093 }, 00:10:43.093 { 00:10:43.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.094 "dma_device_type": 2 00:10:43.094 } 00:10:43.094 ], 00:10:43.094 "driver_specific": {} 00:10:43.094 } 00:10:43.094 ] 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.094 15:19:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.094 "name": "Existed_Raid", 00:10:43.094 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:43.094 "strip_size_kb": 64, 00:10:43.094 "state": "online", 00:10:43.094 "raid_level": "raid0", 00:10:43.094 "superblock": true, 00:10:43.094 "num_base_bdevs": 4, 00:10:43.094 "num_base_bdevs_discovered": 4, 00:10:43.094 "num_base_bdevs_operational": 4, 00:10:43.094 "base_bdevs_list": [ 00:10:43.094 { 00:10:43.094 "name": "BaseBdev1", 00:10:43.094 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:43.094 "is_configured": 
true, 00:10:43.094 "data_offset": 2048, 00:10:43.094 "data_size": 63488 00:10:43.094 }, 00:10:43.094 { 00:10:43.094 "name": "BaseBdev2", 00:10:43.094 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:43.094 "is_configured": true, 00:10:43.094 "data_offset": 2048, 00:10:43.094 "data_size": 63488 00:10:43.094 }, 00:10:43.094 { 00:10:43.094 "name": "BaseBdev3", 00:10:43.094 "uuid": "95585d73-0655-4dfa-b639-18fc2252ef69", 00:10:43.094 "is_configured": true, 00:10:43.094 "data_offset": 2048, 00:10:43.094 "data_size": 63488 00:10:43.094 }, 00:10:43.094 { 00:10:43.094 "name": "BaseBdev4", 00:10:43.094 "uuid": "939e946e-bab4-4fae-bc0e-950cad934cea", 00:10:43.094 "is_configured": true, 00:10:43.094 "data_offset": 2048, 00:10:43.094 "data_size": 63488 00:10:43.094 } 00:10:43.094 ] 00:10:43.094 }' 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.094 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.663 15:19:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.663 [2024-11-10 15:19:49.777884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.663 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.663 "name": "Existed_Raid", 00:10:43.663 "aliases": [ 00:10:43.663 "ac20a7ea-9ff8-456f-952d-89a888f6b051" 00:10:43.663 ], 00:10:43.663 "product_name": "Raid Volume", 00:10:43.663 "block_size": 512, 00:10:43.663 "num_blocks": 253952, 00:10:43.663 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:43.663 "assigned_rate_limits": { 00:10:43.663 "rw_ios_per_sec": 0, 00:10:43.663 "rw_mbytes_per_sec": 0, 00:10:43.663 "r_mbytes_per_sec": 0, 00:10:43.663 "w_mbytes_per_sec": 0 00:10:43.663 }, 00:10:43.663 "claimed": false, 00:10:43.663 "zoned": false, 00:10:43.663 "supported_io_types": { 00:10:43.663 "read": true, 00:10:43.663 "write": true, 00:10:43.663 "unmap": true, 00:10:43.663 "flush": true, 00:10:43.663 "reset": true, 00:10:43.663 "nvme_admin": false, 00:10:43.663 "nvme_io": false, 00:10:43.663 "nvme_io_md": false, 00:10:43.663 "write_zeroes": true, 00:10:43.663 "zcopy": false, 00:10:43.663 "get_zone_info": false, 00:10:43.663 "zone_management": false, 00:10:43.663 "zone_append": false, 00:10:43.663 "compare": false, 00:10:43.663 "compare_and_write": false, 00:10:43.663 "abort": false, 00:10:43.663 "seek_hole": false, 00:10:43.663 "seek_data": false, 00:10:43.663 "copy": false, 00:10:43.663 "nvme_iov_md": false 00:10:43.663 }, 00:10:43.663 "memory_domains": [ 00:10:43.663 { 00:10:43.663 "dma_device_id": "system", 00:10:43.663 "dma_device_type": 1 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:43.663 "dma_device_type": 2 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "system", 00:10:43.663 "dma_device_type": 1 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.663 "dma_device_type": 2 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "system", 00:10:43.663 "dma_device_type": 1 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.663 "dma_device_type": 2 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "system", 00:10:43.663 "dma_device_type": 1 00:10:43.663 }, 00:10:43.663 { 00:10:43.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.663 "dma_device_type": 2 00:10:43.663 } 00:10:43.663 ], 00:10:43.663 "driver_specific": { 00:10:43.663 "raid": { 00:10:43.663 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:43.663 "strip_size_kb": 64, 00:10:43.663 "state": "online", 00:10:43.663 "raid_level": "raid0", 00:10:43.663 "superblock": true, 00:10:43.663 "num_base_bdevs": 4, 00:10:43.663 "num_base_bdevs_discovered": 4, 00:10:43.663 "num_base_bdevs_operational": 4, 00:10:43.663 "base_bdevs_list": [ 00:10:43.664 { 00:10:43.664 "name": "BaseBdev1", 00:10:43.664 "uuid": "6c3c3f4a-9173-4a33-a03d-ad7fa339945e", 00:10:43.664 "is_configured": true, 00:10:43.664 "data_offset": 2048, 00:10:43.664 "data_size": 63488 00:10:43.664 }, 00:10:43.664 { 00:10:43.664 "name": "BaseBdev2", 00:10:43.664 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:43.664 "is_configured": true, 00:10:43.664 "data_offset": 2048, 00:10:43.664 "data_size": 63488 00:10:43.664 }, 00:10:43.664 { 00:10:43.664 "name": "BaseBdev3", 00:10:43.664 "uuid": "95585d73-0655-4dfa-b639-18fc2252ef69", 00:10:43.664 "is_configured": true, 00:10:43.664 "data_offset": 2048, 00:10:43.664 "data_size": 63488 00:10:43.664 }, 00:10:43.664 { 00:10:43.664 "name": "BaseBdev4", 00:10:43.664 "uuid": "939e946e-bab4-4fae-bc0e-950cad934cea", 00:10:43.664 "is_configured": true, 00:10:43.664 
"data_offset": 2048, 00:10:43.664 "data_size": 63488 00:10:43.664 } 00:10:43.664 ] 00:10:43.664 } 00:10:43.664 } 00:10:43.664 }' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:43.664 BaseBdev2 00:10:43.664 BaseBdev3 00:10:43.664 BaseBdev4' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.664 15:19:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.664 15:19:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.664 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.664 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.664 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.664 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 [2024-11-10 15:19:50.085650] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:43.924 [2024-11-10 15:19:50.085681] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.924 [2024-11-10 15:19:50.085745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:43.924 15:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.924 "name": "Existed_Raid", 00:10:43.924 "uuid": "ac20a7ea-9ff8-456f-952d-89a888f6b051", 00:10:43.924 "strip_size_kb": 64, 00:10:43.924 
"state": "offline", 00:10:43.924 "raid_level": "raid0", 00:10:43.924 "superblock": true, 00:10:43.924 "num_base_bdevs": 4, 00:10:43.924 "num_base_bdevs_discovered": 3, 00:10:43.924 "num_base_bdevs_operational": 3, 00:10:43.924 "base_bdevs_list": [ 00:10:43.924 { 00:10:43.924 "name": null, 00:10:43.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:43.924 "is_configured": false, 00:10:43.924 "data_offset": 0, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev2", 00:10:43.924 "uuid": "da094306-d252-4127-8d69-ee767f1a0a81", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev3", 00:10:43.924 "uuid": "95585d73-0655-4dfa-b639-18fc2252ef69", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 }, 00:10:43.924 { 00:10:43.924 "name": "BaseBdev4", 00:10:43.924 "uuid": "939e946e-bab4-4fae-bc0e-950cad934cea", 00:10:43.924 "is_configured": true, 00:10:43.924 "data_offset": 2048, 00:10:43.924 "data_size": 63488 00:10:43.924 } 00:10:43.924 ] 00:10:43.924 }' 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.924 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 [2024-11-10 15:19:50.633390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 [2024-11-10 15:19:50.700829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 [2024-11-10 15:19:50.752269] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:44.494 [2024-11-10 15:19:50.752329] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:44.494 BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.494 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.495 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:44.495 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.495 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 [ 00:10:44.755 { 00:10:44.755 "name": "BaseBdev2", 00:10:44.755 "aliases": [ 00:10:44.755 "55108960-583d-44ae-97f9-b87c706b3c88" 00:10:44.755 ], 00:10:44.755 "product_name": "Malloc disk", 00:10:44.755 "block_size": 512, 00:10:44.755 "num_blocks": 65536, 00:10:44.755 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:44.755 
"assigned_rate_limits": { 00:10:44.755 "rw_ios_per_sec": 0, 00:10:44.755 "rw_mbytes_per_sec": 0, 00:10:44.755 "r_mbytes_per_sec": 0, 00:10:44.755 "w_mbytes_per_sec": 0 00:10:44.755 }, 00:10:44.755 "claimed": false, 00:10:44.755 "zoned": false, 00:10:44.755 "supported_io_types": { 00:10:44.755 "read": true, 00:10:44.755 "write": true, 00:10:44.755 "unmap": true, 00:10:44.755 "flush": true, 00:10:44.755 "reset": true, 00:10:44.755 "nvme_admin": false, 00:10:44.755 "nvme_io": false, 00:10:44.755 "nvme_io_md": false, 00:10:44.755 "write_zeroes": true, 00:10:44.755 "zcopy": true, 00:10:44.755 "get_zone_info": false, 00:10:44.755 "zone_management": false, 00:10:44.755 "zone_append": false, 00:10:44.755 "compare": false, 00:10:44.755 "compare_and_write": false, 00:10:44.755 "abort": true, 00:10:44.755 "seek_hole": false, 00:10:44.755 "seek_data": false, 00:10:44.755 "copy": true, 00:10:44.755 "nvme_iov_md": false 00:10:44.755 }, 00:10:44.755 "memory_domains": [ 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.755 "dma_device_type": 2 00:10:44.755 } 00:10:44.755 ], 00:10:44.755 "driver_specific": {} 00:10:44.755 } 00:10:44.755 ] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.755 15:19:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 BaseBdev3 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.755 [ 00:10:44.755 { 00:10:44.755 "name": "BaseBdev3", 00:10:44.755 "aliases": [ 00:10:44.755 "7e41b2b7-544a-4968-b59a-74daae428007" 00:10:44.755 ], 00:10:44.755 "product_name": "Malloc disk", 00:10:44.755 "block_size": 512, 00:10:44.755 "num_blocks": 65536, 00:10:44.755 
"uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:44.755 "assigned_rate_limits": { 00:10:44.755 "rw_ios_per_sec": 0, 00:10:44.755 "rw_mbytes_per_sec": 0, 00:10:44.755 "r_mbytes_per_sec": 0, 00:10:44.755 "w_mbytes_per_sec": 0 00:10:44.755 }, 00:10:44.755 "claimed": false, 00:10:44.755 "zoned": false, 00:10:44.755 "supported_io_types": { 00:10:44.755 "read": true, 00:10:44.755 "write": true, 00:10:44.755 "unmap": true, 00:10:44.755 "flush": true, 00:10:44.755 "reset": true, 00:10:44.755 "nvme_admin": false, 00:10:44.755 "nvme_io": false, 00:10:44.755 "nvme_io_md": false, 00:10:44.755 "write_zeroes": true, 00:10:44.755 "zcopy": true, 00:10:44.755 "get_zone_info": false, 00:10:44.755 "zone_management": false, 00:10:44.755 "zone_append": false, 00:10:44.755 "compare": false, 00:10:44.755 "compare_and_write": false, 00:10:44.755 "abort": true, 00:10:44.755 "seek_hole": false, 00:10:44.755 "seek_data": false, 00:10:44.755 "copy": true, 00:10:44.755 "nvme_iov_md": false 00:10:44.755 }, 00:10:44.755 "memory_domains": [ 00:10:44.755 { 00:10:44.755 "dma_device_id": "system", 00:10:44.755 "dma_device_type": 1 00:10:44.755 }, 00:10:44.755 { 00:10:44.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.755 "dma_device_type": 2 00:10:44.755 } 00:10:44.755 ], 00:10:44.755 "driver_specific": {} 00:10:44.755 } 00:10:44.755 ] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.755 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 BaseBdev4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 [ 00:10:44.756 { 00:10:44.756 "name": "BaseBdev4", 00:10:44.756 "aliases": [ 00:10:44.756 "ff2578c5-4ea2-4b65-bd87-2a16a2633b81" 00:10:44.756 ], 00:10:44.756 "product_name": "Malloc disk", 00:10:44.756 "block_size": 512, 
00:10:44.756 "num_blocks": 65536, 00:10:44.756 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:44.756 "assigned_rate_limits": { 00:10:44.756 "rw_ios_per_sec": 0, 00:10:44.756 "rw_mbytes_per_sec": 0, 00:10:44.756 "r_mbytes_per_sec": 0, 00:10:44.756 "w_mbytes_per_sec": 0 00:10:44.756 }, 00:10:44.756 "claimed": false, 00:10:44.756 "zoned": false, 00:10:44.756 "supported_io_types": { 00:10:44.756 "read": true, 00:10:44.756 "write": true, 00:10:44.756 "unmap": true, 00:10:44.756 "flush": true, 00:10:44.756 "reset": true, 00:10:44.756 "nvme_admin": false, 00:10:44.756 "nvme_io": false, 00:10:44.756 "nvme_io_md": false, 00:10:44.756 "write_zeroes": true, 00:10:44.756 "zcopy": true, 00:10:44.756 "get_zone_info": false, 00:10:44.756 "zone_management": false, 00:10:44.756 "zone_append": false, 00:10:44.756 "compare": false, 00:10:44.756 "compare_and_write": false, 00:10:44.756 "abort": true, 00:10:44.756 "seek_hole": false, 00:10:44.756 "seek_data": false, 00:10:44.756 "copy": true, 00:10:44.756 "nvme_iov_md": false 00:10:44.756 }, 00:10:44.756 "memory_domains": [ 00:10:44.756 { 00:10:44.756 "dma_device_id": "system", 00:10:44.756 "dma_device_type": 1 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.756 "dma_device_type": 2 00:10:44.756 } 00:10:44.756 ], 00:10:44.756 "driver_specific": {} 00:10:44.756 } 00:10:44.756 ] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4'\''' -n Existed_Raid 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 [2024-11-10 15:19:50.985559] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.756 [2024-11-10 15:19:50.985666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.756 [2024-11-10 15:19:50.985711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:44.756 [2024-11-10 15:19:50.987660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.756 [2024-11-10 15:19:50.987748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.756 15:19:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.756 15:19:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.756 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.756 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.756 "name": "Existed_Raid", 00:10:44.756 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:44.756 "strip_size_kb": 64, 00:10:44.756 "state": "configuring", 00:10:44.756 "raid_level": "raid0", 00:10:44.756 "superblock": true, 00:10:44.756 "num_base_bdevs": 4, 00:10:44.756 "num_base_bdevs_discovered": 3, 00:10:44.756 "num_base_bdevs_operational": 4, 00:10:44.756 "base_bdevs_list": [ 00:10:44.756 { 00:10:44.756 "name": "BaseBdev1", 00:10:44.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.756 "is_configured": false, 00:10:44.756 "data_offset": 0, 00:10:44.756 "data_size": 0 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "name": "BaseBdev2", 00:10:44.756 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 2048, 00:10:44.756 "data_size": 63488 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 "name": "BaseBdev3", 00:10:44.756 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 2048, 00:10:44.756 "data_size": 63488 00:10:44.756 }, 00:10:44.756 { 00:10:44.756 
"name": "BaseBdev4", 00:10:44.756 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:44.756 "is_configured": true, 00:10:44.756 "data_offset": 2048, 00:10:44.756 "data_size": 63488 00:10:44.756 } 00:10:44.756 ] 00:10:44.756 }' 00:10:44.756 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.756 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 [2024-11-10 15:19:51.417665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.327 15:19:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.327 "name": "Existed_Raid", 00:10:45.327 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:45.327 "strip_size_kb": 64, 00:10:45.327 "state": "configuring", 00:10:45.327 "raid_level": "raid0", 00:10:45.327 "superblock": true, 00:10:45.327 "num_base_bdevs": 4, 00:10:45.327 "num_base_bdevs_discovered": 2, 00:10:45.327 "num_base_bdevs_operational": 4, 00:10:45.327 "base_bdevs_list": [ 00:10:45.327 { 00:10:45.327 "name": "BaseBdev1", 00:10:45.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.327 "is_configured": false, 00:10:45.327 "data_offset": 0, 00:10:45.327 "data_size": 0 00:10:45.327 }, 00:10:45.327 { 00:10:45.327 "name": null, 00:10:45.327 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:45.327 "is_configured": false, 00:10:45.327 "data_offset": 0, 00:10:45.327 "data_size": 63488 00:10:45.327 }, 00:10:45.327 { 00:10:45.327 "name": "BaseBdev3", 00:10:45.327 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:45.327 "is_configured": true, 00:10:45.327 "data_offset": 2048, 00:10:45.327 "data_size": 63488 00:10:45.327 }, 00:10:45.327 { 00:10:45.327 "name": 
"BaseBdev4", 00:10:45.327 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:45.327 "is_configured": true, 00:10:45.327 "data_offset": 2048, 00:10:45.327 "data_size": 63488 00:10:45.327 } 00:10:45.327 ] 00:10:45.327 }' 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.327 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 [2024-11-10 15:19:51.876810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.587 BaseBdev1 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:10:45.587 
15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.587 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.587 [ 00:10:45.587 { 00:10:45.587 "name": "BaseBdev1", 00:10:45.587 "aliases": [ 00:10:45.587 "49c7df36-0247-4b02-b376-f05695656e51" 00:10:45.587 ], 00:10:45.587 "product_name": "Malloc disk", 00:10:45.587 "block_size": 512, 00:10:45.587 "num_blocks": 65536, 00:10:45.587 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:45.587 "assigned_rate_limits": { 00:10:45.587 "rw_ios_per_sec": 0, 00:10:45.587 "rw_mbytes_per_sec": 0, 00:10:45.587 "r_mbytes_per_sec": 0, 00:10:45.587 "w_mbytes_per_sec": 0 00:10:45.587 }, 00:10:45.587 "claimed": true, 00:10:45.587 "claim_type": "exclusive_write", 00:10:45.587 "zoned": false, 00:10:45.587 "supported_io_types": { 00:10:45.587 "read": true, 00:10:45.587 "write": true, 00:10:45.587 "unmap": 
true, 00:10:45.587 "flush": true, 00:10:45.587 "reset": true, 00:10:45.587 "nvme_admin": false, 00:10:45.587 "nvme_io": false, 00:10:45.587 "nvme_io_md": false, 00:10:45.587 "write_zeroes": true, 00:10:45.587 "zcopy": true, 00:10:45.587 "get_zone_info": false, 00:10:45.587 "zone_management": false, 00:10:45.587 "zone_append": false, 00:10:45.587 "compare": false, 00:10:45.587 "compare_and_write": false, 00:10:45.587 "abort": true, 00:10:45.587 "seek_hole": false, 00:10:45.587 "seek_data": false, 00:10:45.588 "copy": true, 00:10:45.588 "nvme_iov_md": false 00:10:45.588 }, 00:10:45.588 "memory_domains": [ 00:10:45.588 { 00:10:45.588 "dma_device_id": "system", 00:10:45.588 "dma_device_type": 1 00:10:45.588 }, 00:10:45.588 { 00:10:45.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.588 "dma_device_type": 2 00:10:45.588 } 00:10:45.588 ], 00:10:45.588 "driver_specific": {} 00:10:45.588 } 00:10:45.588 ] 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.588 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.847 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.847 "name": "Existed_Raid", 00:10:45.847 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:45.847 "strip_size_kb": 64, 00:10:45.847 "state": "configuring", 00:10:45.847 "raid_level": "raid0", 00:10:45.847 "superblock": true, 00:10:45.847 "num_base_bdevs": 4, 00:10:45.847 "num_base_bdevs_discovered": 3, 00:10:45.847 "num_base_bdevs_operational": 4, 00:10:45.847 "base_bdevs_list": [ 00:10:45.847 { 00:10:45.847 "name": "BaseBdev1", 00:10:45.847 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:45.847 "is_configured": true, 00:10:45.847 "data_offset": 2048, 00:10:45.847 "data_size": 63488 00:10:45.847 }, 00:10:45.847 { 00:10:45.847 "name": null, 00:10:45.847 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:45.847 "is_configured": false, 00:10:45.847 "data_offset": 0, 00:10:45.847 "data_size": 63488 00:10:45.847 }, 00:10:45.847 { 00:10:45.847 "name": "BaseBdev3", 00:10:45.847 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:45.847 
"is_configured": true, 00:10:45.847 "data_offset": 2048, 00:10:45.847 "data_size": 63488 00:10:45.847 }, 00:10:45.847 { 00:10:45.847 "name": "BaseBdev4", 00:10:45.847 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:45.847 "is_configured": true, 00:10:45.847 "data_offset": 2048, 00:10:45.847 "data_size": 63488 00:10:45.847 } 00:10:45.847 ] 00:10:45.847 }' 00:10:45.848 15:19:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.848 15:19:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.108 [2024-11-10 15:19:52.409047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.108 
15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.108 "name": "Existed_Raid", 00:10:46.108 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:46.108 "strip_size_kb": 64, 00:10:46.108 "state": "configuring", 00:10:46.108 "raid_level": "raid0", 00:10:46.108 "superblock": true, 00:10:46.108 "num_base_bdevs": 4, 
00:10:46.108 "num_base_bdevs_discovered": 2, 00:10:46.108 "num_base_bdevs_operational": 4, 00:10:46.108 "base_bdevs_list": [ 00:10:46.108 { 00:10:46.108 "name": "BaseBdev1", 00:10:46.108 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:46.108 "is_configured": true, 00:10:46.108 "data_offset": 2048, 00:10:46.108 "data_size": 63488 00:10:46.108 }, 00:10:46.108 { 00:10:46.108 "name": null, 00:10:46.108 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:46.108 "is_configured": false, 00:10:46.108 "data_offset": 0, 00:10:46.108 "data_size": 63488 00:10:46.108 }, 00:10:46.108 { 00:10:46.108 "name": null, 00:10:46.108 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:46.108 "is_configured": false, 00:10:46.108 "data_offset": 0, 00:10:46.108 "data_size": 63488 00:10:46.108 }, 00:10:46.108 { 00:10:46.108 "name": "BaseBdev4", 00:10:46.108 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:46.108 "is_configured": true, 00:10:46.108 "data_offset": 2048, 00:10:46.108 "data_size": 63488 00:10:46.108 } 00:10:46.108 ] 00:10:46.108 }' 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.108 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:46.678 15:19:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.678 [2024-11-10 15:19:52.893258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.678 "name": "Existed_Raid", 00:10:46.678 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:46.678 "strip_size_kb": 64, 00:10:46.678 "state": "configuring", 00:10:46.678 "raid_level": "raid0", 00:10:46.678 "superblock": true, 00:10:46.678 "num_base_bdevs": 4, 00:10:46.678 "num_base_bdevs_discovered": 3, 00:10:46.678 "num_base_bdevs_operational": 4, 00:10:46.678 "base_bdevs_list": [ 00:10:46.678 { 00:10:46.678 "name": "BaseBdev1", 00:10:46.678 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:46.678 "is_configured": true, 00:10:46.678 "data_offset": 2048, 00:10:46.678 "data_size": 63488 00:10:46.678 }, 00:10:46.678 { 00:10:46.678 "name": null, 00:10:46.678 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:46.678 "is_configured": false, 00:10:46.678 "data_offset": 0, 00:10:46.678 "data_size": 63488 00:10:46.678 }, 00:10:46.678 { 00:10:46.678 "name": "BaseBdev3", 00:10:46.678 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:46.678 "is_configured": true, 00:10:46.678 "data_offset": 2048, 00:10:46.678 "data_size": 63488 00:10:46.678 }, 00:10:46.678 { 00:10:46.678 "name": "BaseBdev4", 00:10:46.678 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:46.678 "is_configured": true, 00:10:46.678 "data_offset": 2048, 00:10:46.678 "data_size": 63488 00:10:46.678 } 00:10:46.678 ] 00:10:46.678 }' 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.678 15:19:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.247 [2024-11-10 15:19:53.401429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:47.247 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.248 "name": "Existed_Raid", 00:10:47.248 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:47.248 "strip_size_kb": 64, 00:10:47.248 "state": "configuring", 00:10:47.248 "raid_level": "raid0", 00:10:47.248 "superblock": true, 00:10:47.248 "num_base_bdevs": 4, 00:10:47.248 "num_base_bdevs_discovered": 2, 00:10:47.248 "num_base_bdevs_operational": 4, 00:10:47.248 "base_bdevs_list": [ 00:10:47.248 { 00:10:47.248 "name": null, 00:10:47.248 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:47.248 "is_configured": false, 00:10:47.248 "data_offset": 0, 00:10:47.248 "data_size": 63488 00:10:47.248 }, 00:10:47.248 { 00:10:47.248 "name": null, 00:10:47.248 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:47.248 "is_configured": false, 00:10:47.248 "data_offset": 0, 00:10:47.248 "data_size": 63488 00:10:47.248 
}, 00:10:47.248 { 00:10:47.248 "name": "BaseBdev3", 00:10:47.248 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:47.248 "is_configured": true, 00:10:47.248 "data_offset": 2048, 00:10:47.248 "data_size": 63488 00:10:47.248 }, 00:10:47.248 { 00:10:47.248 "name": "BaseBdev4", 00:10:47.248 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:47.248 "is_configured": true, 00:10:47.248 "data_offset": 2048, 00:10:47.248 "data_size": 63488 00:10:47.248 } 00:10:47.248 ] 00:10:47.248 }' 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.248 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.508 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 [2024-11-10 15:19:53.868187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.768 "name": "Existed_Raid", 00:10:47.768 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:47.768 
"strip_size_kb": 64, 00:10:47.768 "state": "configuring", 00:10:47.768 "raid_level": "raid0", 00:10:47.768 "superblock": true, 00:10:47.768 "num_base_bdevs": 4, 00:10:47.768 "num_base_bdevs_discovered": 3, 00:10:47.768 "num_base_bdevs_operational": 4, 00:10:47.768 "base_bdevs_list": [ 00:10:47.768 { 00:10:47.768 "name": null, 00:10:47.768 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:47.768 "is_configured": false, 00:10:47.768 "data_offset": 0, 00:10:47.768 "data_size": 63488 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev2", 00:10:47.768 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 2048, 00:10:47.768 "data_size": 63488 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev3", 00:10:47.768 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 2048, 00:10:47.768 "data_size": 63488 00:10:47.768 }, 00:10:47.768 { 00:10:47.768 "name": "BaseBdev4", 00:10:47.768 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:47.768 "is_configured": true, 00:10:47.768 "data_offset": 2048, 00:10:47.768 "data_size": 63488 00:10:47.768 } 00:10:47.768 ] 00:10:47.768 }' 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.768 15:19:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.028 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 49c7df36-0247-4b02-b376-f05695656e51 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 [2024-11-10 15:19:54.403525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:48.288 NewBaseBdev 00:10:48.288 [2024-11-10 15:19:54.403822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.288 [2024-11-10 15:19:54.403851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:48.288 [2024-11-10 15:19:54.404148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:10:48.288 [2024-11-10 15:19:54.404280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.288 [2024-11-10 15:19:54.404292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:48.288 [2024-11-10 15:19:54.404399] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 [ 00:10:48.288 { 00:10:48.288 "name": "NewBaseBdev", 00:10:48.288 "aliases": [ 00:10:48.288 "49c7df36-0247-4b02-b376-f05695656e51" 00:10:48.288 ], 00:10:48.288 "product_name": "Malloc disk", 00:10:48.288 "block_size": 512, 00:10:48.288 "num_blocks": 65536, 00:10:48.288 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:48.288 
"assigned_rate_limits": { 00:10:48.288 "rw_ios_per_sec": 0, 00:10:48.288 "rw_mbytes_per_sec": 0, 00:10:48.288 "r_mbytes_per_sec": 0, 00:10:48.288 "w_mbytes_per_sec": 0 00:10:48.288 }, 00:10:48.288 "claimed": true, 00:10:48.288 "claim_type": "exclusive_write", 00:10:48.288 "zoned": false, 00:10:48.288 "supported_io_types": { 00:10:48.288 "read": true, 00:10:48.288 "write": true, 00:10:48.288 "unmap": true, 00:10:48.288 "flush": true, 00:10:48.288 "reset": true, 00:10:48.288 "nvme_admin": false, 00:10:48.288 "nvme_io": false, 00:10:48.288 "nvme_io_md": false, 00:10:48.288 "write_zeroes": true, 00:10:48.288 "zcopy": true, 00:10:48.288 "get_zone_info": false, 00:10:48.288 "zone_management": false, 00:10:48.288 "zone_append": false, 00:10:48.288 "compare": false, 00:10:48.288 "compare_and_write": false, 00:10:48.288 "abort": true, 00:10:48.288 "seek_hole": false, 00:10:48.288 "seek_data": false, 00:10:48.288 "copy": true, 00:10:48.288 "nvme_iov_md": false 00:10:48.288 }, 00:10:48.288 "memory_domains": [ 00:10:48.288 { 00:10:48.288 "dma_device_id": "system", 00:10:48.288 "dma_device_type": 1 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.288 "dma_device_type": 2 00:10:48.288 } 00:10:48.288 ], 00:10:48.288 "driver_specific": {} 00:10:48.288 } 00:10:48.288 ] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.288 "name": "Existed_Raid", 00:10:48.288 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:48.288 "strip_size_kb": 64, 00:10:48.288 "state": "online", 00:10:48.288 "raid_level": "raid0", 00:10:48.288 "superblock": true, 00:10:48.288 "num_base_bdevs": 4, 00:10:48.288 "num_base_bdevs_discovered": 4, 00:10:48.288 "num_base_bdevs_operational": 4, 00:10:48.288 "base_bdevs_list": [ 00:10:48.288 { 00:10:48.288 "name": "NewBaseBdev", 00:10:48.288 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 2048, 
00:10:48.288 "data_size": 63488 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "name": "BaseBdev2", 00:10:48.288 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 2048, 00:10:48.288 "data_size": 63488 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "name": "BaseBdev3", 00:10:48.288 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 2048, 00:10:48.288 "data_size": 63488 00:10:48.288 }, 00:10:48.288 { 00:10:48.288 "name": "BaseBdev4", 00:10:48.288 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:48.288 "is_configured": true, 00:10:48.288 "data_offset": 2048, 00:10:48.288 "data_size": 63488 00:10:48.288 } 00:10:48.288 ] 00:10:48.288 }' 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.288 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:48.549 [2024-11-10 15:19:54.872130] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.549 15:19:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.809 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:48.809 "name": "Existed_Raid", 00:10:48.809 "aliases": [ 00:10:48.809 "473d1304-b6f0-4618-9083-accb40c93b92" 00:10:48.809 ], 00:10:48.809 "product_name": "Raid Volume", 00:10:48.809 "block_size": 512, 00:10:48.809 "num_blocks": 253952, 00:10:48.809 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:48.809 "assigned_rate_limits": { 00:10:48.809 "rw_ios_per_sec": 0, 00:10:48.809 "rw_mbytes_per_sec": 0, 00:10:48.809 "r_mbytes_per_sec": 0, 00:10:48.809 "w_mbytes_per_sec": 0 00:10:48.809 }, 00:10:48.809 "claimed": false, 00:10:48.809 "zoned": false, 00:10:48.809 "supported_io_types": { 00:10:48.809 "read": true, 00:10:48.809 "write": true, 00:10:48.809 "unmap": true, 00:10:48.809 "flush": true, 00:10:48.809 "reset": true, 00:10:48.809 "nvme_admin": false, 00:10:48.809 "nvme_io": false, 00:10:48.809 "nvme_io_md": false, 00:10:48.809 "write_zeroes": true, 00:10:48.809 "zcopy": false, 00:10:48.809 "get_zone_info": false, 00:10:48.809 "zone_management": false, 00:10:48.809 "zone_append": false, 00:10:48.809 "compare": false, 00:10:48.809 "compare_and_write": false, 00:10:48.809 "abort": false, 00:10:48.809 "seek_hole": false, 00:10:48.809 "seek_data": false, 00:10:48.809 "copy": false, 00:10:48.809 "nvme_iov_md": false 00:10:48.809 }, 00:10:48.809 "memory_domains": [ 00:10:48.809 { 00:10:48.809 "dma_device_id": "system", 00:10:48.809 "dma_device_type": 1 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.809 "dma_device_type": 2 00:10:48.809 }, 
00:10:48.809 { 00:10:48.809 "dma_device_id": "system", 00:10:48.809 "dma_device_type": 1 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.809 "dma_device_type": 2 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "system", 00:10:48.809 "dma_device_type": 1 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.809 "dma_device_type": 2 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "system", 00:10:48.809 "dma_device_type": 1 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.809 "dma_device_type": 2 00:10:48.809 } 00:10:48.809 ], 00:10:48.809 "driver_specific": { 00:10:48.809 "raid": { 00:10:48.809 "uuid": "473d1304-b6f0-4618-9083-accb40c93b92", 00:10:48.809 "strip_size_kb": 64, 00:10:48.809 "state": "online", 00:10:48.809 "raid_level": "raid0", 00:10:48.809 "superblock": true, 00:10:48.809 "num_base_bdevs": 4, 00:10:48.809 "num_base_bdevs_discovered": 4, 00:10:48.809 "num_base_bdevs_operational": 4, 00:10:48.809 "base_bdevs_list": [ 00:10:48.809 { 00:10:48.809 "name": "NewBaseBdev", 00:10:48.809 "uuid": "49c7df36-0247-4b02-b376-f05695656e51", 00:10:48.809 "is_configured": true, 00:10:48.809 "data_offset": 2048, 00:10:48.809 "data_size": 63488 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "name": "BaseBdev2", 00:10:48.809 "uuid": "55108960-583d-44ae-97f9-b87c706b3c88", 00:10:48.809 "is_configured": true, 00:10:48.809 "data_offset": 2048, 00:10:48.809 "data_size": 63488 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "name": "BaseBdev3", 00:10:48.809 "uuid": "7e41b2b7-544a-4968-b59a-74daae428007", 00:10:48.809 "is_configured": true, 00:10:48.809 "data_offset": 2048, 00:10:48.809 "data_size": 63488 00:10:48.809 }, 00:10:48.809 { 00:10:48.809 "name": "BaseBdev4", 00:10:48.809 "uuid": "ff2578c5-4ea2-4b65-bd87-2a16a2633b81", 00:10:48.809 "is_configured": true, 00:10:48.809 "data_offset": 2048, 00:10:48.809 "data_size": 63488 
00:10:48.809 } 00:10:48.809 ] 00:10:48.809 } 00:10:48.809 } 00:10:48.809 }' 00:10:48.809 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.809 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:48.809 BaseBdev2 00:10:48.809 BaseBdev3 00:10:48.809 BaseBdev4' 00:10:48.809 15:19:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.809 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.069 [2024-11-10 15:19:55.191807] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.069 [2024-11-10 15:19:55.191882] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.069 [2024-11-10 15:19:55.191988] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.069 [2024-11-10 15:19:55.192079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.069 [2024-11-10 15:19:55.192100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82385 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 82385 ']' 00:10:49.069 15:19:55 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 82385 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82385 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82385' 00:10:49.069 killing process with pid 82385 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 82385 00:10:49.069 [2024-11-10 15:19:55.242224] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.069 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 82385 00:10:49.069 [2024-11-10 15:19:55.284476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:49.329 15:19:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:49.329 00:10:49.329 real 0m9.841s 00:10:49.329 user 0m16.874s 00:10:49.329 sys 0m2.024s 00:10:49.329 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:49.330 15:19:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 ************************************ 00:10:49.330 END TEST raid_state_function_test_sb 00:10:49.330 ************************************ 00:10:49.330 15:19:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:49.330 15:19:55 
bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:49.330 15:19:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:49.330 15:19:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 ************************************ 00:10:49.330 START TEST raid_superblock_test 00:10:49.330 ************************************ 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid0 4 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:49.330 15:19:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83040 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83040 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 83040 ']' 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:49.330 15:19:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 [2024-11-10 15:19:55.659004] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:49.330 [2024-11-10 15:19:55.659258] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83040 ] 00:10:49.590 [2024-11-10 15:19:55.794757] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:49.590 [2024-11-10 15:19:55.832881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:49.590 [2024-11-10 15:19:55.859189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.590 [2024-11-10 15:19:55.902515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.590 [2024-11-10 15:19:55.902639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.159 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 malloc1 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-10 15:19:56.538586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.420 [2024-11-10 15:19:56.538701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.420 [2024-11-10 15:19:56.538757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:50.420 [2024-11-10 15:19:56.538796] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.420 [2024-11-10 15:19:56.541027] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.420 [2024-11-10 15:19:56.541111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.420 pt1 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 
00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 malloc2 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-10 15:19:56.571426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:50.420 [2024-11-10 15:19:56.571521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.420 [2024-11-10 15:19:56.571557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:50.420 [2024-11-10 15:19:56.571586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.420 [2024-11-10 15:19:56.573704] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.420 [2024-11-10 15:19:56.573776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:50.420 pt2 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 malloc3 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-10 15:19:56.604239] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:50.420 [2024-11-10 15:19:56.604332] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.420 [2024-11-10 15:19:56.604357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:50.420 [2024-11-10 15:19:56.604367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:10:50.420 [2024-11-10 15:19:56.606637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.420 [2024-11-10 15:19:56.606673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:50.420 pt3 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 malloc4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-10 15:19:56.646584] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:50.420 [2024-11-10 15:19:56.646677] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.420 [2024-11-10 15:19:56.646716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:50.420 [2024-11-10 15:19:56.646746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.420 [2024-11-10 15:19:56.648862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.420 [2024-11-10 15:19:56.648938] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:50.420 pt4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.420 [2024-11-10 15:19:56.658643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.420 [2024-11-10 15:19:56.660564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:50.420 [2024-11-10 15:19:56.660678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:50.420 [2024-11-10 15:19:56.660761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 
00:10:50.420 [2024-11-10 15:19:56.660940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:10:50.420 [2024-11-10 15:19:56.660996] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:50.420 [2024-11-10 15:19:56.661284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:10:50.420 [2024-11-10 15:19:56.661481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:10:50.420 [2024-11-10 15:19:56.661528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:10:50.420 [2024-11-10 15:19:56.661687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:50.420 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.421 
15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.421 "name": "raid_bdev1", 00:10:50.421 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:50.421 "strip_size_kb": 64, 00:10:50.421 "state": "online", 00:10:50.421 "raid_level": "raid0", 00:10:50.421 "superblock": true, 00:10:50.421 "num_base_bdevs": 4, 00:10:50.421 "num_base_bdevs_discovered": 4, 00:10:50.421 "num_base_bdevs_operational": 4, 00:10:50.421 "base_bdevs_list": [ 00:10:50.421 { 00:10:50.421 "name": "pt1", 00:10:50.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 }, 00:10:50.421 { 00:10:50.421 "name": "pt2", 00:10:50.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 }, 00:10:50.421 { 00:10:50.421 "name": "pt3", 00:10:50.421 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 }, 00:10:50.421 { 00:10:50.421 "name": "pt4", 00:10:50.421 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.421 "is_configured": true, 00:10:50.421 "data_offset": 2048, 00:10:50.421 "data_size": 63488 00:10:50.421 } 00:10:50.421 ] 00:10:50.421 }' 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.421 15:19:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.991 [2024-11-10 15:19:57.095133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.991 "name": "raid_bdev1", 00:10:50.991 "aliases": [ 00:10:50.991 "e7600165-8ac8-4d54-99a5-7e8c203d3401" 00:10:50.991 ], 00:10:50.991 "product_name": "Raid Volume", 00:10:50.991 "block_size": 512, 00:10:50.991 "num_blocks": 253952, 00:10:50.991 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:50.991 "assigned_rate_limits": { 00:10:50.991 "rw_ios_per_sec": 0, 00:10:50.991 "rw_mbytes_per_sec": 0, 00:10:50.991 "r_mbytes_per_sec": 0, 00:10:50.991 
"w_mbytes_per_sec": 0 00:10:50.991 }, 00:10:50.991 "claimed": false, 00:10:50.991 "zoned": false, 00:10:50.991 "supported_io_types": { 00:10:50.991 "read": true, 00:10:50.991 "write": true, 00:10:50.991 "unmap": true, 00:10:50.991 "flush": true, 00:10:50.991 "reset": true, 00:10:50.991 "nvme_admin": false, 00:10:50.991 "nvme_io": false, 00:10:50.991 "nvme_io_md": false, 00:10:50.991 "write_zeroes": true, 00:10:50.991 "zcopy": false, 00:10:50.991 "get_zone_info": false, 00:10:50.991 "zone_management": false, 00:10:50.991 "zone_append": false, 00:10:50.991 "compare": false, 00:10:50.991 "compare_and_write": false, 00:10:50.991 "abort": false, 00:10:50.991 "seek_hole": false, 00:10:50.991 "seek_data": false, 00:10:50.991 "copy": false, 00:10:50.991 "nvme_iov_md": false 00:10:50.991 }, 00:10:50.991 "memory_domains": [ 00:10:50.991 { 00:10:50.991 "dma_device_id": "system", 00:10:50.991 "dma_device_type": 1 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.991 "dma_device_type": 2 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "system", 00:10:50.991 "dma_device_type": 1 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.991 "dma_device_type": 2 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "system", 00:10:50.991 "dma_device_type": 1 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.991 "dma_device_type": 2 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "system", 00:10:50.991 "dma_device_type": 1 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.991 "dma_device_type": 2 00:10:50.991 } 00:10:50.991 ], 00:10:50.991 "driver_specific": { 00:10:50.991 "raid": { 00:10:50.991 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:50.991 "strip_size_kb": 64, 00:10:50.991 "state": "online", 00:10:50.991 "raid_level": "raid0", 00:10:50.991 "superblock": true, 
00:10:50.991 "num_base_bdevs": 4, 00:10:50.991 "num_base_bdevs_discovered": 4, 00:10:50.991 "num_base_bdevs_operational": 4, 00:10:50.991 "base_bdevs_list": [ 00:10:50.991 { 00:10:50.991 "name": "pt1", 00:10:50.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.991 "is_configured": true, 00:10:50.991 "data_offset": 2048, 00:10:50.991 "data_size": 63488 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "name": "pt2", 00:10:50.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.991 "is_configured": true, 00:10:50.991 "data_offset": 2048, 00:10:50.991 "data_size": 63488 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "name": "pt3", 00:10:50.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.991 "is_configured": true, 00:10:50.991 "data_offset": 2048, 00:10:50.991 "data_size": 63488 00:10:50.991 }, 00:10:50.991 { 00:10:50.991 "name": "pt4", 00:10:50.991 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.991 "is_configured": true, 00:10:50.991 "data_offset": 2048, 00:10:50.991 "data_size": 63488 00:10:50.991 } 00:10:50.991 ] 00:10:50.991 } 00:10:50.991 } 00:10:50.991 }' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:50.991 pt2 00:10:50.991 pt3 00:10:50.991 pt4' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.991 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.992 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 [2024-11-10 15:19:57.391170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e7600165-8ac8-4d54-99a5-7e8c203d3401 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e7600165-8ac8-4d54-99a5-7e8c203d3401 ']' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 [2024-11-10 15:19:57.434809] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.252 [2024-11-10 15:19:57.434837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.252 [2024-11-10 15:19:57.434924] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.252 [2024-11-10 15:19:57.434996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.252 [2024-11-10 15:19:57.435022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:51.252 15:19:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b 
''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.252 [2024-11-10 15:19:57.594904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:51.252 [2024-11-10 15:19:57.596999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:51.252 [2024-11-10 15:19:57.597059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:51.252 [2024-11-10 15:19:57.597090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:51.252 [2024-11-10 15:19:57.597135] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:51.252 [2024-11-10 15:19:57.597182] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:51.252 [2024-11-10 15:19:57.597200] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:51.252 [2024-11-10 15:19:57.597218] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:51.252 [2024-11-10 15:19:57.597231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.252 [2024-11-10 15:19:57.597242] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:10:51.252 request: 00:10:51.252 { 00:10:51.252 "name": "raid_bdev1", 00:10:51.252 "raid_level": "raid0", 00:10:51.252 "base_bdevs": [ 00:10:51.252 "malloc1", 00:10:51.252 "malloc2", 00:10:51.252 "malloc3", 00:10:51.252 "malloc4" 00:10:51.252 ], 00:10:51.252 "strip_size_kb": 64, 00:10:51.252 
"superblock": false, 00:10:51.252 "method": "bdev_raid_create", 00:10:51.252 "req_id": 1 00:10:51.252 } 00:10:51.252 Got JSON-RPC error response 00:10:51.252 response: 00:10:51.252 { 00:10:51.252 "code": -17, 00:10:51.252 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:51.252 } 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.252 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.515 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 [2024-11-10 15:19:57.662883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on malloc1 00:10:51.516 [2024-11-10 15:19:57.663005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.516 [2024-11-10 15:19:57.663059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:51.516 [2024-11-10 15:19:57.663099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.516 [2024-11-10 15:19:57.665487] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.516 [2024-11-10 15:19:57.665567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:51.516 [2024-11-10 15:19:57.665673] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:51.516 [2024-11-10 15:19:57.665757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:51.516 pt1 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.516 "name": "raid_bdev1", 00:10:51.516 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:51.516 "strip_size_kb": 64, 00:10:51.516 "state": "configuring", 00:10:51.516 "raid_level": "raid0", 00:10:51.516 "superblock": true, 00:10:51.516 "num_base_bdevs": 4, 00:10:51.516 "num_base_bdevs_discovered": 1, 00:10:51.516 "num_base_bdevs_operational": 4, 00:10:51.516 "base_bdevs_list": [ 00:10:51.516 { 00:10:51.516 "name": "pt1", 00:10:51.516 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.516 "is_configured": true, 00:10:51.516 "data_offset": 2048, 00:10:51.516 "data_size": 63488 00:10:51.516 }, 00:10:51.516 { 00:10:51.516 "name": null, 00:10:51.516 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.516 "is_configured": false, 00:10:51.516 "data_offset": 2048, 00:10:51.516 "data_size": 63488 00:10:51.516 }, 00:10:51.516 { 00:10:51.516 "name": null, 00:10:51.516 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.516 "is_configured": false, 00:10:51.516 "data_offset": 2048, 00:10:51.516 "data_size": 63488 00:10:51.516 }, 00:10:51.516 { 00:10:51.516 "name": null, 00:10:51.516 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.516 "is_configured": false, 00:10:51.516 "data_offset": 
2048, 00:10:51.516 "data_size": 63488 00:10:51.516 } 00:10:51.516 ] 00:10:51.516 }' 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.516 15:19:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.776 [2024-11-10 15:19:58.091037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.776 [2024-11-10 15:19:58.091102] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.776 [2024-11-10 15:19:58.091123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:51.776 [2024-11-10 15:19:58.091135] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.776 [2024-11-10 15:19:58.091603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.776 [2024-11-10 15:19:58.091636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.776 [2024-11-10 15:19:58.091720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.776 [2024-11-10 15:19:58.091748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.776 pt2 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.776 15:19:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.776 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.776 [2024-11-10 15:19:58.099007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.777 15:19:58 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.036 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.036 "name": "raid_bdev1", 00:10:52.036 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:52.036 "strip_size_kb": 64, 00:10:52.036 "state": "configuring", 00:10:52.036 "raid_level": "raid0", 00:10:52.036 "superblock": true, 00:10:52.036 "num_base_bdevs": 4, 00:10:52.036 "num_base_bdevs_discovered": 1, 00:10:52.036 "num_base_bdevs_operational": 4, 00:10:52.036 "base_bdevs_list": [ 00:10:52.036 { 00:10:52.036 "name": "pt1", 00:10:52.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.036 "is_configured": true, 00:10:52.036 "data_offset": 2048, 00:10:52.036 "data_size": 63488 00:10:52.036 }, 00:10:52.036 { 00:10:52.036 "name": null, 00:10:52.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.036 "is_configured": false, 00:10:52.036 "data_offset": 0, 00:10:52.036 "data_size": 63488 00:10:52.036 }, 00:10:52.036 { 00:10:52.036 "name": null, 00:10:52.036 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.036 "is_configured": false, 00:10:52.036 "data_offset": 2048, 00:10:52.036 "data_size": 63488 00:10:52.036 }, 00:10:52.036 { 00:10:52.036 "name": null, 00:10:52.036 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.036 "is_configured": false, 00:10:52.036 "data_offset": 2048, 00:10:52.036 "data_size": 63488 00:10:52.036 } 00:10:52.036 ] 00:10:52.036 }' 00:10:52.036 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.036 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.296 [2024-11-10 15:19:58.547166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:52.296 [2024-11-10 15:19:58.547247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.296 [2024-11-10 15:19:58.547271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:52.296 [2024-11-10 15:19:58.547281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.296 [2024-11-10 15:19:58.547719] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.296 [2024-11-10 15:19:58.547750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:52.296 [2024-11-10 15:19:58.547840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:52.296 [2024-11-10 15:19:58.547864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:52.296 pt2 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.296 [2024-11-10 15:19:58.559137] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc3 00:10:52.296 [2024-11-10 15:19:58.559191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.296 [2024-11-10 15:19:58.559210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:52.296 [2024-11-10 15:19:58.559226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.296 [2024-11-10 15:19:58.559604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.296 [2024-11-10 15:19:58.559632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:52.296 [2024-11-10 15:19:58.559701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:52.296 [2024-11-10 15:19:58.559722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:52.296 pt3 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.296 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.297 [2024-11-10 15:19:58.571127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:52.297 [2024-11-10 15:19:58.571179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.297 [2024-11-10 15:19:58.571199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:52.297 [2024-11-10 15:19:58.571207] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.297 [2024-11-10 15:19:58.571537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.297 [2024-11-10 15:19:58.571562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:52.297 [2024-11-10 15:19:58.571643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:52.297 [2024-11-10 15:19:58.571664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:52.297 [2024-11-10 15:19:58.571776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:10:52.297 [2024-11-10 15:19:58.571791] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.297 [2024-11-10 15:19:58.572101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:52.297 [2024-11-10 15:19:58.572274] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:10:52.297 [2024-11-10 15:19:58.572325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:10:52.297 [2024-11-10 15:19:58.572478] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.297 pt4 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.297 "name": "raid_bdev1", 00:10:52.297 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:52.297 "strip_size_kb": 64, 00:10:52.297 "state": "online", 00:10:52.297 "raid_level": "raid0", 00:10:52.297 "superblock": true, 00:10:52.297 "num_base_bdevs": 4, 00:10:52.297 "num_base_bdevs_discovered": 4, 00:10:52.297 "num_base_bdevs_operational": 4, 00:10:52.297 "base_bdevs_list": [ 00:10:52.297 { 00:10:52.297 "name": "pt1", 00:10:52.297 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.297 "is_configured": true, 00:10:52.297 "data_offset": 2048, 00:10:52.297 
"data_size": 63488 00:10:52.297 }, 00:10:52.297 { 00:10:52.297 "name": "pt2", 00:10:52.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.297 "is_configured": true, 00:10:52.297 "data_offset": 2048, 00:10:52.297 "data_size": 63488 00:10:52.297 }, 00:10:52.297 { 00:10:52.297 "name": "pt3", 00:10:52.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.297 "is_configured": true, 00:10:52.297 "data_offset": 2048, 00:10:52.297 "data_size": 63488 00:10:52.297 }, 00:10:52.297 { 00:10:52.297 "name": "pt4", 00:10:52.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.297 "is_configured": true, 00:10:52.297 "data_offset": 2048, 00:10:52.297 "data_size": 63488 00:10:52.297 } 00:10:52.297 ] 00:10:52.297 }' 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.297 15:19:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.867 15:19:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:10:52.867 [2024-11-10 15:19:59.007657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.867 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.867 "name": "raid_bdev1", 00:10:52.867 "aliases": [ 00:10:52.867 "e7600165-8ac8-4d54-99a5-7e8c203d3401" 00:10:52.867 ], 00:10:52.867 "product_name": "Raid Volume", 00:10:52.867 "block_size": 512, 00:10:52.867 "num_blocks": 253952, 00:10:52.867 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:52.867 "assigned_rate_limits": { 00:10:52.867 "rw_ios_per_sec": 0, 00:10:52.867 "rw_mbytes_per_sec": 0, 00:10:52.867 "r_mbytes_per_sec": 0, 00:10:52.867 "w_mbytes_per_sec": 0 00:10:52.867 }, 00:10:52.867 "claimed": false, 00:10:52.867 "zoned": false, 00:10:52.867 "supported_io_types": { 00:10:52.867 "read": true, 00:10:52.867 "write": true, 00:10:52.867 "unmap": true, 00:10:52.867 "flush": true, 00:10:52.867 "reset": true, 00:10:52.867 "nvme_admin": false, 00:10:52.867 "nvme_io": false, 00:10:52.867 "nvme_io_md": false, 00:10:52.867 "write_zeroes": true, 00:10:52.867 "zcopy": false, 00:10:52.867 "get_zone_info": false, 00:10:52.867 "zone_management": false, 00:10:52.867 "zone_append": false, 00:10:52.867 "compare": false, 00:10:52.867 "compare_and_write": false, 00:10:52.867 "abort": false, 00:10:52.867 "seek_hole": false, 00:10:52.867 "seek_data": false, 00:10:52.867 "copy": false, 00:10:52.867 "nvme_iov_md": false 00:10:52.867 }, 00:10:52.867 "memory_domains": [ 00:10:52.867 { 00:10:52.867 "dma_device_id": "system", 00:10:52.867 "dma_device_type": 1 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.867 "dma_device_type": 2 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "system", 00:10:52.867 "dma_device_type": 1 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:52.867 "dma_device_type": 2 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "system", 00:10:52.867 "dma_device_type": 1 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.867 "dma_device_type": 2 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "system", 00:10:52.867 "dma_device_type": 1 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.867 "dma_device_type": 2 00:10:52.867 } 00:10:52.867 ], 00:10:52.867 "driver_specific": { 00:10:52.867 "raid": { 00:10:52.867 "uuid": "e7600165-8ac8-4d54-99a5-7e8c203d3401", 00:10:52.867 "strip_size_kb": 64, 00:10:52.867 "state": "online", 00:10:52.867 "raid_level": "raid0", 00:10:52.867 "superblock": true, 00:10:52.867 "num_base_bdevs": 4, 00:10:52.867 "num_base_bdevs_discovered": 4, 00:10:52.867 "num_base_bdevs_operational": 4, 00:10:52.867 "base_bdevs_list": [ 00:10:52.867 { 00:10:52.867 "name": "pt1", 00:10:52.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.867 "is_configured": true, 00:10:52.867 "data_offset": 2048, 00:10:52.867 "data_size": 63488 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "name": "pt2", 00:10:52.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.867 "is_configured": true, 00:10:52.867 "data_offset": 2048, 00:10:52.867 "data_size": 63488 00:10:52.867 }, 00:10:52.867 { 00:10:52.867 "name": "pt3", 00:10:52.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.868 "is_configured": true, 00:10:52.868 "data_offset": 2048, 00:10:52.868 "data_size": 63488 00:10:52.868 }, 00:10:52.868 { 00:10:52.868 "name": "pt4", 00:10:52.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.868 "is_configured": true, 00:10:52.868 "data_offset": 2048, 00:10:52.868 "data_size": 63488 00:10:52.868 } 00:10:52.868 ] 00:10:52.868 } 00:10:52.868 } 00:10:52.868 }' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.868 pt2 00:10:52.868 pt3 00:10:52.868 pt4' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.868 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.868 
15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.128 [2024-11-10 15:19:59.351700] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e7600165-8ac8-4d54-99a5-7e8c203d3401 '!=' e7600165-8ac8-4d54-99a5-7e8c203d3401 ']' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83040 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 83040 ']' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 83040 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83040 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83040' 00:10:53.128 killing process with pid 83040 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 83040 00:10:53.128 [2024-11-10 15:19:59.435235] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.128 [2024-11-10 15:19:59.435353] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.128 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 83040 00:10:53.128 [2024-11-10 15:19:59.435444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.128 [2024-11-10 15:19:59.435456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:10:53.128 [2024-11-10 15:19:59.480829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.388 15:19:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.388 00:10:53.388 real 0m4.130s 00:10:53.388 user 0m6.512s 00:10:53.388 sys 0m0.912s 00:10:53.388 ************************************ 00:10:53.388 END TEST raid_superblock_test 00:10:53.388 ************************************ 00:10:53.388 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:53.388 15:19:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.649 15:19:59 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:53.649 15:19:59 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:53.649 15:19:59 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.649 15:19:59 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.649 ************************************ 00:10:53.649 START TEST raid_read_error_test 00:10:53.649 ************************************ 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 read 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AJGxwrDy0R 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83288 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83288 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 83288 ']' 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.649 
15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:53.649 15:19:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.649 [2024-11-10 15:19:59.873912] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:53.649 [2024-11-10 15:19:59.874137] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83288 ] 00:10:53.649 [2024-11-10 15:20:00.007646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:53.909 [2024-11-10 15:20:00.044293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.909 [2024-11-10 15:20:00.071692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.909 [2024-11-10 15:20:00.116495] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.909 [2024-11-10 15:20:00.116590] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.480 BaseBdev1_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.480 true 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:54.480 [2024-11-10 15:20:00.748872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.480 [2024-11-10 15:20:00.748978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.480 [2024-11-10 15:20:00.749017] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.480 [2024-11-10 15:20:00.749033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.480 [2024-11-10 15:20:00.751377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.480 [2024-11-10 15:20:00.751421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.480 BaseBdev1 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.480 BaseBdev2_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.480 true 00:10:54.480 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.481 15:20:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.481 [2024-11-10 15:20:00.789709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.481 [2024-11-10 15:20:00.789765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.481 [2024-11-10 15:20:00.789781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.481 [2024-11-10 15:20:00.789791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.481 [2024-11-10 15:20:00.792015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.481 [2024-11-10 15:20:00.792063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.481 BaseBdev2 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.481 BaseBdev3_malloc 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.481 true 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.481 [2024-11-10 15:20:00.830528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:54.481 [2024-11-10 15:20:00.830582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.481 [2024-11-10 15:20:00.830599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:54.481 [2024-11-10 15:20:00.830609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.481 [2024-11-10 15:20:00.832784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.481 [2024-11-10 15:20:00.832824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:54.481 BaseBdev3 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.481 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 BaseBdev4_malloc 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 true 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 [2024-11-10 15:20:00.881502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:54.742 [2024-11-10 15:20:00.881562] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.742 [2024-11-10 15:20:00.881580] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:54.742 [2024-11-10 15:20:00.881591] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.742 [2024-11-10 15:20:00.883787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.742 [2024-11-10 15:20:00.883829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:54.742 BaseBdev4 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 [2024-11-10 15:20:00.893547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:54.742 [2024-11-10 15:20:00.895534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.742 [2024-11-10 15:20:00.895610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:54.742 [2024-11-10 15:20:00.895676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:54.742 [2024-11-10 15:20:00.895878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:54.742 [2024-11-10 15:20:00.895892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:54.742 [2024-11-10 15:20:00.896151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:54.742 [2024-11-10 15:20:00.896293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:54.742 [2024-11-10 15:20:00.896309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:54.742 [2024-11-10 15:20:00.896431] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:54.742 15:20:00 
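The loop above (bdev_raid.sh@814–@817) stacks three bdevs under each base device: a malloc backing bdev, an error-injection bdev wrapped around it, and a passthru bdev that the RAID actually claims; bdev_raid.sh@821 then assembles the four passthrus into `raid_bdev1`. A sketch that just generates the RPC command strings seen in the log (nothing is executed; the names mirror the transcript):

```python
# Each BaseBdevN is built as: malloc -> error (EE_*) -> passthru.
# Command strings mirror the rpc_cmd calls in the log above.
base_bdevs = ["BaseBdev1", "BaseBdev2", "BaseBdev3", "BaseBdev4"]

def construction_rpcs(name):
    return [
        f"bdev_malloc_create 32 512 -b {name}_malloc",
        f"bdev_error_create {name}_malloc",
        f"bdev_passthru_create -b EE_{name}_malloc -p {name}",
    ]

cmds = [c for b in base_bdevs for c in construction_rpcs(b)]
cmds.append("bdev_raid_create -z 64 -r raid0 "
            "-b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s")

print(len(cmds))   # 13: 3 RPCs per base bdev plus the raid create
print(cmds[0])     # bdev_malloc_create 32 512 -b BaseBdev1_malloc
```

The error bdev in the middle of each stack is what later lets `bdev_error_inject_error EE_BaseBdev1_malloc read failure` fail I/O on exactly one leg of the RAID.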
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.742 "name": "raid_bdev1", 00:10:54.742 "uuid": "13856460-9ea2-4c56-9af8-9a3bd45e556f", 00:10:54.742 "strip_size_kb": 64, 00:10:54.742 "state": "online", 00:10:54.742 "raid_level": "raid0", 00:10:54.742 "superblock": true, 00:10:54.742 "num_base_bdevs": 4, 00:10:54.742 "num_base_bdevs_discovered": 4, 00:10:54.742 "num_base_bdevs_operational": 4, 00:10:54.742 "base_bdevs_list": [ 00:10:54.742 { 00:10:54.742 "name": "BaseBdev1", 00:10:54.742 "uuid": "f6c8fc15-bf8d-5b7b-87ad-12b983075e51", 00:10:54.742 "is_configured": true, 00:10:54.742 "data_offset": 2048, 00:10:54.742 "data_size": 63488 00:10:54.742 }, 00:10:54.742 { 00:10:54.742 "name": "BaseBdev2", 00:10:54.742 "uuid": "ad018691-6dca-532a-9d15-af36ada444d6", 
00:10:54.742 "is_configured": true, 00:10:54.742 "data_offset": 2048, 00:10:54.742 "data_size": 63488 00:10:54.742 }, 00:10:54.742 { 00:10:54.742 "name": "BaseBdev3", 00:10:54.742 "uuid": "ba41f8ce-4224-5b6a-9839-c2fabc082ca4", 00:10:54.742 "is_configured": true, 00:10:54.742 "data_offset": 2048, 00:10:54.742 "data_size": 63488 00:10:54.742 }, 00:10:54.742 { 00:10:54.742 "name": "BaseBdev4", 00:10:54.742 "uuid": "b8a1773f-e563-5f26-b9ca-25a1e82896cd", 00:10:54.742 "is_configured": true, 00:10:54.742 "data_offset": 2048, 00:10:54.742 "data_size": 63488 00:10:54.742 } 00:10:54.742 ] 00:10:54.742 }' 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.742 15:20:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.003 15:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.003 15:20:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.262 [2024-11-10 15:20:01.438115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:56.201 15:20:02 
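The `verify_raid_bdev_state raid_bdev1 online raid0 64 4` check above reduces to a handful of assertions on the `bdev_raid_get_bdevs` JSON. Re-stated in Python against the raid_bdev_info blob captured in the log (uuids and per-bdev detail abbreviated):

```python
import json

# Abbreviated form of the raid_bdev_info JSON printed at bdev_raid.sh@113 above.
info = json.loads("""
{
  "name": "raid_bdev1",
  "strip_size_kb": 64,
  "state": "online",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# verify_raid_bdev_state raid_bdev1 online raid0 64 4, as plain assertions.
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 4
assert (sum(b["is_configured"] for b in info["base_bdevs_list"])
        == info["num_base_bdevs_discovered"])
print("raid_bdev1 state verified")
```

Because raid0 has no redundancy, the test expects all four base bdevs to stay discovered and configured even while read errors are injected; the array is expected to fail I/O rather than degrade.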
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.201 "name": "raid_bdev1", 00:10:56.201 "uuid": "13856460-9ea2-4c56-9af8-9a3bd45e556f", 00:10:56.201 "strip_size_kb": 64, 00:10:56.201 "state": "online", 00:10:56.201 "raid_level": "raid0", 00:10:56.201 "superblock": true, 00:10:56.201 "num_base_bdevs": 4, 
00:10:56.201 "num_base_bdevs_discovered": 4, 00:10:56.201 "num_base_bdevs_operational": 4, 00:10:56.201 "base_bdevs_list": [ 00:10:56.201 { 00:10:56.201 "name": "BaseBdev1", 00:10:56.201 "uuid": "f6c8fc15-bf8d-5b7b-87ad-12b983075e51", 00:10:56.201 "is_configured": true, 00:10:56.201 "data_offset": 2048, 00:10:56.201 "data_size": 63488 00:10:56.201 }, 00:10:56.201 { 00:10:56.201 "name": "BaseBdev2", 00:10:56.201 "uuid": "ad018691-6dca-532a-9d15-af36ada444d6", 00:10:56.201 "is_configured": true, 00:10:56.201 "data_offset": 2048, 00:10:56.201 "data_size": 63488 00:10:56.201 }, 00:10:56.201 { 00:10:56.201 "name": "BaseBdev3", 00:10:56.201 "uuid": "ba41f8ce-4224-5b6a-9839-c2fabc082ca4", 00:10:56.201 "is_configured": true, 00:10:56.201 "data_offset": 2048, 00:10:56.201 "data_size": 63488 00:10:56.201 }, 00:10:56.201 { 00:10:56.201 "name": "BaseBdev4", 00:10:56.201 "uuid": "b8a1773f-e563-5f26-b9ca-25a1e82896cd", 00:10:56.201 "is_configured": true, 00:10:56.201 "data_offset": 2048, 00:10:56.201 "data_size": 63488 00:10:56.201 } 00:10:56.201 ] 00:10:56.201 }' 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.201 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.462 [2024-11-10 15:20:02.764510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.462 [2024-11-10 15:20:02.764609] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.462 [2024-11-10 15:20:02.767176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.462 [2024-11-10 15:20:02.767285] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.462 [2024-11-10 15:20:02.767349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.462 [2024-11-10 15:20:02.767412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:56.462 { 00:10:56.462 "results": [ 00:10:56.462 { 00:10:56.462 "job": "raid_bdev1", 00:10:56.462 "core_mask": "0x1", 00:10:56.462 "workload": "randrw", 00:10:56.462 "percentage": 50, 00:10:56.462 "status": "finished", 00:10:56.462 "queue_depth": 1, 00:10:56.462 "io_size": 131072, 00:10:56.462 "runtime": 1.324385, 00:10:56.462 "iops": 16225.64435568207, 00:10:56.462 "mibps": 2028.2055444602588, 00:10:56.462 "io_failed": 1, 00:10:56.462 "io_timeout": 0, 00:10:56.462 "avg_latency_us": 85.57112855695308, 00:10:56.462 "min_latency_us": 25.21398065022226, 00:10:56.462 "max_latency_us": 1435.188703913536 00:10:56.462 } 00:10:56.462 ], 00:10:56.462 "core_count": 1 00:10:56.462 } 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83288 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 83288 ']' 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 83288 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83288 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- 
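The `fail_per_s=0.76` that the test greps out of the bdevperf log further down follows directly from the `results` JSON above: it is `io_failed` divided by `runtime`, and the reported MiB/s is likewise derivable from the IOPS and the 128 KiB I/O size (`-o 128k`). A quick check of that arithmetic using the figures from the JSON:

```python
# Figures taken from the bdevperf "results" JSON in the log above.
io_failed = 1
runtime_s = 1.324385
iops = 16225.64435568207
io_size = 131072  # bdevperf was started with -o 128k

fail_per_s = io_failed / runtime_s
mibps = iops * io_size / (1024 * 1024)

print(f"{fail_per_s:.2f}")   # 0.76, matching the fail_per_s the test extracts
print(f"{mibps:.10f}")       # ~2028.2055444603, matching the reported mibps
```

Since raid0 `has_redundancy` returns 1 (no redundancy), the final check `[[ 0.76 != \0\.\0\0 ]]` passes precisely because the injected read errors surface as failed I/O instead of being absorbed by the array.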
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83288' 00:10:56.462 killing process with pid 83288 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 83288 00:10:56.462 [2024-11-10 15:20:02.816449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.462 15:20:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 83288 00:10:56.722 [2024-11-10 15:20:02.852961] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AJGxwrDy0R 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:56.722 ************************************ 00:10:56.722 END TEST raid_read_error_test 00:10:56.722 ************************************ 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:10:56.722 00:10:56.722 real 0m3.300s 00:10:56.722 user 0m4.127s 00:10:56.722 sys 0m0.573s 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:56.722 15:20:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.981 15:20:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test raid0 4 write 00:10:56.981 15:20:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:10:56.981 15:20:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:56.981 15:20:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.981 ************************************ 00:10:56.981 START TEST raid_write_error_test 00:10:56.981 ************************************ 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid0 4 write 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.981 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tZSiYmKJje 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83417 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 83417 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 83417 ']' 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:56.982 15:20:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.982 [2024-11-10 15:20:03.243489] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:10:56.982 [2024-11-10 15:20:03.243603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83417 ] 00:10:57.242 [2024-11-10 15:20:03.376677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:10:57.242 [2024-11-10 15:20:03.412754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.242 [2024-11-10 15:20:03.437606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.242 [2024-11-10 15:20:03.480078] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.242 [2024-11-10 15:20:03.480117] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 BaseBdev1_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 true 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 [2024-11-10 15:20:04.115670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.812 [2024-11-10 15:20:04.115774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.812 [2024-11-10 15:20:04.115812] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.812 [2024-11-10 15:20:04.115843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.812 [2024-11-10 15:20:04.117990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.812 [2024-11-10 15:20:04.118074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.812 BaseBdev1 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 BaseBdev2_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 true 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.812 [2024-11-10 15:20:04.156202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.812 [2024-11-10 15:20:04.156254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.812 [2024-11-10 15:20:04.156271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.812 [2024-11-10 15:20:04.156282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.812 [2024-11-10 15:20:04.158355] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.812 [2024-11-10 15:20:04.158390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.812 BaseBdev2 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.812 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.076 BaseBdev3_malloc 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.076 true 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.076 [2024-11-10 15:20:04.196814] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:58.076 [2024-11-10 15:20:04.196865] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.076 [2024-11-10 15:20:04.196898] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:58.076 [2024-11-10 15:20:04.196909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.076 [2024-11-10 15:20:04.198962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.076 [2024-11-10 15:20:04.199000] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:58.076 BaseBdev3 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:58.076 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 BaseBdev4_malloc 00:10:58.077 
15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 true 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 [2024-11-10 15:20:04.251700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:58.077 [2024-11-10 15:20:04.251809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.077 [2024-11-10 15:20:04.251833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:58.077 [2024-11-10 15:20:04.251846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.077 [2024-11-10 15:20:04.253949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.077 [2024-11-10 15:20:04.253992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:58.077 BaseBdev4 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:58.077 15:20:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 [2024-11-10 15:20:04.263745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.077 [2024-11-10 15:20:04.265817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.077 [2024-11-10 15:20:04.265885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.077 [2024-11-10 15:20:04.265936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:58.077 [2024-11-10 15:20:04.266142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:58.077 [2024-11-10 15:20:04.266159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.077 [2024-11-10 15:20:04.266417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:10:58.077 [2024-11-10 15:20:04.266556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:58.077 [2024-11-10 15:20:04.266566] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:58.077 [2024-11-10 15:20:04.266717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.077 "name": "raid_bdev1", 00:10:58.077 "uuid": "f6e0cd85-82c5-4657-b94c-9b9feb298331", 00:10:58.077 "strip_size_kb": 64, 00:10:58.077 "state": "online", 00:10:58.077 "raid_level": "raid0", 00:10:58.077 "superblock": true, 00:10:58.077 "num_base_bdevs": 4, 00:10:58.077 "num_base_bdevs_discovered": 4, 00:10:58.077 "num_base_bdevs_operational": 4, 00:10:58.077 "base_bdevs_list": [ 00:10:58.077 { 00:10:58.077 "name": "BaseBdev1", 00:10:58.077 "uuid": "857a1066-8090-56f5-94cc-088a3ebc60c0", 00:10:58.077 "is_configured": true, 00:10:58.077 "data_offset": 2048, 00:10:58.077 "data_size": 63488 00:10:58.077 }, 00:10:58.077 { 00:10:58.077 
"name": "BaseBdev2", 00:10:58.077 "uuid": "de93851c-77ef-58dd-b240-175b28671528", 00:10:58.077 "is_configured": true, 00:10:58.077 "data_offset": 2048, 00:10:58.077 "data_size": 63488 00:10:58.077 }, 00:10:58.077 { 00:10:58.077 "name": "BaseBdev3", 00:10:58.077 "uuid": "06df3e5c-4a5b-5595-b99a-b9b83433b1ad", 00:10:58.077 "is_configured": true, 00:10:58.077 "data_offset": 2048, 00:10:58.077 "data_size": 63488 00:10:58.077 }, 00:10:58.077 { 00:10:58.077 "name": "BaseBdev4", 00:10:58.077 "uuid": "887bb087-c1ad-53a4-b64d-9d8dc566906f", 00:10:58.077 "is_configured": true, 00:10:58.077 "data_offset": 2048, 00:10:58.077 "data_size": 63488 00:10:58.077 } 00:10:58.077 ] 00:10:58.077 }' 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.077 15:20:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.656 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:58.656 15:20:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:58.656 [2024-11-10 15:20:04.820292] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.595 "name": "raid_bdev1", 00:10:59.595 "uuid": "f6e0cd85-82c5-4657-b94c-9b9feb298331", 00:10:59.595 "strip_size_kb": 64, 00:10:59.595 "state": "online", 
00:10:59.595 "raid_level": "raid0", 00:10:59.595 "superblock": true, 00:10:59.595 "num_base_bdevs": 4, 00:10:59.595 "num_base_bdevs_discovered": 4, 00:10:59.595 "num_base_bdevs_operational": 4, 00:10:59.595 "base_bdevs_list": [ 00:10:59.595 { 00:10:59.595 "name": "BaseBdev1", 00:10:59.595 "uuid": "857a1066-8090-56f5-94cc-088a3ebc60c0", 00:10:59.595 "is_configured": true, 00:10:59.595 "data_offset": 2048, 00:10:59.595 "data_size": 63488 00:10:59.595 }, 00:10:59.595 { 00:10:59.595 "name": "BaseBdev2", 00:10:59.595 "uuid": "de93851c-77ef-58dd-b240-175b28671528", 00:10:59.595 "is_configured": true, 00:10:59.595 "data_offset": 2048, 00:10:59.595 "data_size": 63488 00:10:59.595 }, 00:10:59.595 { 00:10:59.595 "name": "BaseBdev3", 00:10:59.595 "uuid": "06df3e5c-4a5b-5595-b99a-b9b83433b1ad", 00:10:59.595 "is_configured": true, 00:10:59.595 "data_offset": 2048, 00:10:59.595 "data_size": 63488 00:10:59.595 }, 00:10:59.595 { 00:10:59.595 "name": "BaseBdev4", 00:10:59.595 "uuid": "887bb087-c1ad-53a4-b64d-9d8dc566906f", 00:10:59.595 "is_configured": true, 00:10:59.595 "data_offset": 2048, 00:10:59.595 "data_size": 63488 00:10:59.595 } 00:10:59.595 ] 00:10:59.595 }' 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.595 15:20:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.855 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.856 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.856 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.856 [2024-11-10 15:20:06.215170] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.856 [2024-11-10 15:20:06.215276] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:00.115 [2024-11-10 15:20:06.218357] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.115 [2024-11-10 15:20:06.218475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.115 [2024-11-10 15:20:06.218544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.115 [2024-11-10 15:20:06.218607] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:00.115 { 00:11:00.115 "results": [ 00:11:00.115 { 00:11:00.115 "job": "raid_bdev1", 00:11:00.115 "core_mask": "0x1", 00:11:00.115 "workload": "randrw", 00:11:00.115 "percentage": 50, 00:11:00.115 "status": "finished", 00:11:00.115 "queue_depth": 1, 00:11:00.115 "io_size": 131072, 00:11:00.115 "runtime": 1.393022, 00:11:00.115 "iops": 16348.629095592172, 00:11:00.115 "mibps": 2043.5786369490215, 00:11:00.115 "io_failed": 1, 00:11:00.115 "io_timeout": 0, 00:11:00.115 "avg_latency_us": 84.74115108033436, 00:11:00.115 "min_latency_us": 25.548679508411052, 00:11:00.115 "max_latency_us": 1349.5057962172057 00:11:00.115 } 00:11:00.115 ], 00:11:00.115 "core_count": 1 00:11:00.115 } 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83417 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 83417 ']' 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 83417 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83417 00:11:00.115 killing process with pid 83417 00:11:00.115 
15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83417' 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 83417 00:11:00.115 [2024-11-10 15:20:06.267870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.115 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 83417 00:11:00.115 [2024-11-10 15:20:06.304670] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tZSiYmKJje 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:00.374 00:11:00.374 real 0m3.388s 00:11:00.374 user 0m4.288s 00:11:00.374 sys 0m0.562s 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:00.374 15:20:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.374 ************************************ 00:11:00.374 END TEST raid_write_error_test 00:11:00.374 
************************************ 00:11:00.374 15:20:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:00.374 15:20:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:00.374 15:20:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:00.374 15:20:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.374 15:20:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.374 ************************************ 00:11:00.374 START TEST raid_state_function_test 00:11:00.374 ************************************ 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 false 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.374 15:20:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:00.374 15:20:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83555 00:11:00.374 Process raid pid: 83555 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83555' 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83555 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 83555 ']' 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:00.374 15:20:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.374 [2024-11-10 15:20:06.725731] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:00.374 [2024-11-10 15:20:06.726502] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.633 [2024-11-10 15:20:06.886156] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:00.633 [2024-11-10 15:20:06.924656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.633 [2024-11-10 15:20:06.950181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.633 [2024-11-10 15:20:06.992876] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.633 [2024-11-10 15:20:06.992991] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.569 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.569 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:01.569 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.569 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.569 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.569 [2024-11-10 15:20:07.571918] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.569 [2024-11-10 15:20:07.572026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.569 [2024-11-10 15:20:07.572062] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.570 [2024-11-10 15:20:07.572085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.570 [2024-11-10 15:20:07.572108] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.570 [2024-11-10 15:20:07.572128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.570 [2024-11-10 15:20:07.572148] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.570 
[2024-11-10 15:20:07.572200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.570 15:20:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.570 "name": "Existed_Raid", 00:11:01.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.570 "strip_size_kb": 64, 00:11:01.570 "state": "configuring", 00:11:01.570 "raid_level": "concat", 00:11:01.570 "superblock": false, 00:11:01.570 "num_base_bdevs": 4, 00:11:01.570 "num_base_bdevs_discovered": 0, 00:11:01.570 "num_base_bdevs_operational": 4, 00:11:01.570 "base_bdevs_list": [ 00:11:01.570 { 00:11:01.570 "name": "BaseBdev1", 00:11:01.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.570 "is_configured": false, 00:11:01.570 "data_offset": 0, 00:11:01.570 "data_size": 0 00:11:01.570 }, 00:11:01.570 { 00:11:01.570 "name": "BaseBdev2", 00:11:01.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.570 "is_configured": false, 00:11:01.570 "data_offset": 0, 00:11:01.570 "data_size": 0 00:11:01.570 }, 00:11:01.570 { 00:11:01.570 "name": "BaseBdev3", 00:11:01.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.570 "is_configured": false, 00:11:01.570 "data_offset": 0, 00:11:01.570 "data_size": 0 00:11:01.570 }, 00:11:01.570 { 00:11:01.570 "name": "BaseBdev4", 00:11:01.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.570 "is_configured": false, 00:11:01.570 "data_offset": 0, 00:11:01.570 "data_size": 0 00:11:01.570 } 00:11:01.570 ] 00:11:01.570 }' 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.570 15:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 [2024-11-10 15:20:08.019955] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.830 [2024-11-10 15:20:08.020061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 [2024-11-10 15:20:08.031984] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.830 [2024-11-10 15:20:08.032044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.830 [2024-11-10 15:20:08.032058] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.830 [2024-11-10 15:20:08.032067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.830 [2024-11-10 15:20:08.032075] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.830 [2024-11-10 15:20:08.032083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.830 [2024-11-10 15:20:08.032092] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.830 [2024-11-10 15:20:08.032099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.830 [2024-11-10 15:20:08.052809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.830 BaseBdev1 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:01.830 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.831 [ 00:11:01.831 { 
00:11:01.831 "name": "BaseBdev1", 00:11:01.831 "aliases": [ 00:11:01.831 "41e71f22-9997-44a7-b47e-8888ba62733c" 00:11:01.831 ], 00:11:01.831 "product_name": "Malloc disk", 00:11:01.831 "block_size": 512, 00:11:01.831 "num_blocks": 65536, 00:11:01.831 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:01.831 "assigned_rate_limits": { 00:11:01.831 "rw_ios_per_sec": 0, 00:11:01.831 "rw_mbytes_per_sec": 0, 00:11:01.831 "r_mbytes_per_sec": 0, 00:11:01.831 "w_mbytes_per_sec": 0 00:11:01.831 }, 00:11:01.831 "claimed": true, 00:11:01.831 "claim_type": "exclusive_write", 00:11:01.831 "zoned": false, 00:11:01.831 "supported_io_types": { 00:11:01.831 "read": true, 00:11:01.831 "write": true, 00:11:01.831 "unmap": true, 00:11:01.831 "flush": true, 00:11:01.831 "reset": true, 00:11:01.831 "nvme_admin": false, 00:11:01.831 "nvme_io": false, 00:11:01.831 "nvme_io_md": false, 00:11:01.831 "write_zeroes": true, 00:11:01.831 "zcopy": true, 00:11:01.831 "get_zone_info": false, 00:11:01.831 "zone_management": false, 00:11:01.831 "zone_append": false, 00:11:01.831 "compare": false, 00:11:01.831 "compare_and_write": false, 00:11:01.831 "abort": true, 00:11:01.831 "seek_hole": false, 00:11:01.831 "seek_data": false, 00:11:01.831 "copy": true, 00:11:01.831 "nvme_iov_md": false 00:11:01.831 }, 00:11:01.831 "memory_domains": [ 00:11:01.831 { 00:11:01.831 "dma_device_id": "system", 00:11:01.831 "dma_device_type": 1 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.831 "dma_device_type": 2 00:11:01.831 } 00:11:01.831 ], 00:11:01.831 "driver_specific": {} 00:11:01.831 } 00:11:01.831 ] 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.831 "name": "Existed_Raid", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.831 "strip_size_kb": 64, 00:11:01.831 "state": "configuring", 00:11:01.831 "raid_level": "concat", 00:11:01.831 "superblock": false, 00:11:01.831 "num_base_bdevs": 4, 00:11:01.831 
"num_base_bdevs_discovered": 1, 00:11:01.831 "num_base_bdevs_operational": 4, 00:11:01.831 "base_bdevs_list": [ 00:11:01.831 { 00:11:01.831 "name": "BaseBdev1", 00:11:01.831 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:01.831 "is_configured": true, 00:11:01.831 "data_offset": 0, 00:11:01.831 "data_size": 65536 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "BaseBdev2", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.831 "is_configured": false, 00:11:01.831 "data_offset": 0, 00:11:01.831 "data_size": 0 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "BaseBdev3", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.831 "is_configured": false, 00:11:01.831 "data_offset": 0, 00:11:01.831 "data_size": 0 00:11:01.831 }, 00:11:01.831 { 00:11:01.831 "name": "BaseBdev4", 00:11:01.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.831 "is_configured": false, 00:11:01.831 "data_offset": 0, 00:11:01.831 "data_size": 0 00:11:01.831 } 00:11:01.831 ] 00:11:01.831 }' 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.831 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.401 [2024-11-10 15:20:08.593055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.401 [2024-11-10 15:20:08.593200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.401 15:20:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.401 [2024-11-10 15:20:08.601074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.401 [2024-11-10 15:20:08.603386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.401 [2024-11-10 15:20:08.603499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.401 [2024-11-10 15:20:08.603539] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.401 [2024-11-10 15:20:08.603566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.401 [2024-11-10 15:20:08.603618] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.401 [2024-11-10 15:20:08.603644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.401 "name": "Existed_Raid", 00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.401 "strip_size_kb": 64, 00:11:02.401 "state": "configuring", 00:11:02.401 "raid_level": "concat", 00:11:02.401 "superblock": false, 00:11:02.401 "num_base_bdevs": 4, 00:11:02.401 "num_base_bdevs_discovered": 1, 00:11:02.401 "num_base_bdevs_operational": 4, 00:11:02.401 "base_bdevs_list": [ 00:11:02.401 { 00:11:02.401 "name": "BaseBdev1", 00:11:02.401 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:02.401 
"is_configured": true, 00:11:02.401 "data_offset": 0, 00:11:02.401 "data_size": 65536 00:11:02.401 }, 00:11:02.401 { 00:11:02.401 "name": "BaseBdev2", 00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.401 "is_configured": false, 00:11:02.401 "data_offset": 0, 00:11:02.401 "data_size": 0 00:11:02.401 }, 00:11:02.401 { 00:11:02.401 "name": "BaseBdev3", 00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.401 "is_configured": false, 00:11:02.401 "data_offset": 0, 00:11:02.401 "data_size": 0 00:11:02.401 }, 00:11:02.401 { 00:11:02.401 "name": "BaseBdev4", 00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.401 "is_configured": false, 00:11:02.401 "data_offset": 0, 00:11:02.401 "data_size": 0 00:11:02.401 } 00:11:02.401 ] 00:11:02.401 }' 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.401 15:20:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.971 [2024-11-10 15:20:09.044111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.971 BaseBdev2 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:02.971 15:20:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.971 [ 00:11:02.971 { 00:11:02.971 "name": "BaseBdev2", 00:11:02.971 "aliases": [ 00:11:02.971 "4c956945-95d6-4e96-ae5c-9c9e3c1bd463" 00:11:02.971 ], 00:11:02.971 "product_name": "Malloc disk", 00:11:02.971 "block_size": 512, 00:11:02.971 "num_blocks": 65536, 00:11:02.971 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:02.971 "assigned_rate_limits": { 00:11:02.971 "rw_ios_per_sec": 0, 00:11:02.971 "rw_mbytes_per_sec": 0, 00:11:02.971 "r_mbytes_per_sec": 0, 00:11:02.971 "w_mbytes_per_sec": 0 00:11:02.971 }, 00:11:02.971 "claimed": true, 00:11:02.971 "claim_type": "exclusive_write", 00:11:02.971 "zoned": false, 00:11:02.971 "supported_io_types": { 00:11:02.971 "read": true, 00:11:02.971 "write": true, 00:11:02.971 "unmap": true, 00:11:02.971 "flush": true, 00:11:02.971 "reset": true, 00:11:02.971 "nvme_admin": false, 00:11:02.971 "nvme_io": false, 00:11:02.971 "nvme_io_md": 
false, 00:11:02.971 "write_zeroes": true, 00:11:02.971 "zcopy": true, 00:11:02.971 "get_zone_info": false, 00:11:02.971 "zone_management": false, 00:11:02.971 "zone_append": false, 00:11:02.971 "compare": false, 00:11:02.971 "compare_and_write": false, 00:11:02.971 "abort": true, 00:11:02.971 "seek_hole": false, 00:11:02.971 "seek_data": false, 00:11:02.971 "copy": true, 00:11:02.971 "nvme_iov_md": false 00:11:02.971 }, 00:11:02.971 "memory_domains": [ 00:11:02.971 { 00:11:02.971 "dma_device_id": "system", 00:11:02.971 "dma_device_type": 1 00:11:02.971 }, 00:11:02.971 { 00:11:02.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.971 "dma_device_type": 2 00:11:02.971 } 00:11:02.971 ], 00:11:02.971 "driver_specific": {} 00:11:02.971 } 00:11:02.971 ] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.971 "name": "Existed_Raid", 00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.971 "strip_size_kb": 64, 00:11:02.971 "state": "configuring", 00:11:02.971 "raid_level": "concat", 00:11:02.971 "superblock": false, 00:11:02.971 "num_base_bdevs": 4, 00:11:02.971 "num_base_bdevs_discovered": 2, 00:11:02.971 "num_base_bdevs_operational": 4, 00:11:02.971 "base_bdevs_list": [ 00:11:02.971 { 00:11:02.971 "name": "BaseBdev1", 00:11:02.971 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:02.971 "is_configured": true, 00:11:02.971 "data_offset": 0, 00:11:02.971 "data_size": 65536 00:11:02.971 }, 00:11:02.971 { 00:11:02.971 "name": "BaseBdev2", 00:11:02.971 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:02.971 "is_configured": true, 00:11:02.971 "data_offset": 0, 00:11:02.971 "data_size": 65536 00:11:02.971 }, 00:11:02.971 { 00:11:02.971 "name": "BaseBdev3", 00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.971 
"is_configured": false, 00:11:02.971 "data_offset": 0, 00:11:02.971 "data_size": 0 00:11:02.971 }, 00:11:02.971 { 00:11:02.971 "name": "BaseBdev4", 00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.971 "is_configured": false, 00:11:02.971 "data_offset": 0, 00:11:02.971 "data_size": 0 00:11:02.971 } 00:11:02.971 ] 00:11:02.971 }' 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.971 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.231 [2024-11-10 15:20:09.520516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.231 BaseBdev3 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:03.231 15:20:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.231 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.231 [ 00:11:03.231 { 00:11:03.231 "name": "BaseBdev3", 00:11:03.231 "aliases": [ 00:11:03.231 "7ae569d5-c83b-4bdf-800c-fc8b94c7670a" 00:11:03.231 ], 00:11:03.231 "product_name": "Malloc disk", 00:11:03.231 "block_size": 512, 00:11:03.231 "num_blocks": 65536, 00:11:03.231 "uuid": "7ae569d5-c83b-4bdf-800c-fc8b94c7670a", 00:11:03.231 "assigned_rate_limits": { 00:11:03.231 "rw_ios_per_sec": 0, 00:11:03.231 "rw_mbytes_per_sec": 0, 00:11:03.231 "r_mbytes_per_sec": 0, 00:11:03.231 "w_mbytes_per_sec": 0 00:11:03.231 }, 00:11:03.231 "claimed": true, 00:11:03.231 "claim_type": "exclusive_write", 00:11:03.231 "zoned": false, 00:11:03.231 "supported_io_types": { 00:11:03.231 "read": true, 00:11:03.231 "write": true, 00:11:03.231 "unmap": true, 00:11:03.231 "flush": true, 00:11:03.231 "reset": true, 00:11:03.231 "nvme_admin": false, 00:11:03.231 "nvme_io": false, 00:11:03.231 "nvme_io_md": false, 00:11:03.231 "write_zeroes": true, 00:11:03.231 "zcopy": true, 00:11:03.231 "get_zone_info": false, 00:11:03.231 "zone_management": false, 00:11:03.231 "zone_append": false, 00:11:03.231 "compare": false, 00:11:03.231 "compare_and_write": false, 00:11:03.231 "abort": true, 00:11:03.231 "seek_hole": false, 00:11:03.231 "seek_data": false, 00:11:03.231 "copy": true, 00:11:03.231 "nvme_iov_md": false 00:11:03.231 }, 00:11:03.231 
"memory_domains": [ 00:11:03.231 { 00:11:03.231 "dma_device_id": "system", 00:11:03.231 "dma_device_type": 1 00:11:03.231 }, 00:11:03.231 { 00:11:03.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.231 "dma_device_type": 2 00:11:03.231 } 00:11:03.231 ], 00:11:03.231 "driver_specific": {} 00:11:03.232 } 00:11:03.232 ] 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.232 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.490 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.490 "name": "Existed_Raid", 00:11:03.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.490 "strip_size_kb": 64, 00:11:03.490 "state": "configuring", 00:11:03.490 "raid_level": "concat", 00:11:03.490 "superblock": false, 00:11:03.490 "num_base_bdevs": 4, 00:11:03.490 "num_base_bdevs_discovered": 3, 00:11:03.490 "num_base_bdevs_operational": 4, 00:11:03.490 "base_bdevs_list": [ 00:11:03.490 { 00:11:03.490 "name": "BaseBdev1", 00:11:03.490 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:03.490 "is_configured": true, 00:11:03.490 "data_offset": 0, 00:11:03.490 "data_size": 65536 00:11:03.490 }, 00:11:03.490 { 00:11:03.490 "name": "BaseBdev2", 00:11:03.490 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:03.490 "is_configured": true, 00:11:03.490 "data_offset": 0, 00:11:03.490 "data_size": 65536 00:11:03.490 }, 00:11:03.490 { 00:11:03.490 "name": "BaseBdev3", 00:11:03.490 "uuid": "7ae569d5-c83b-4bdf-800c-fc8b94c7670a", 00:11:03.490 "is_configured": true, 00:11:03.490 "data_offset": 0, 00:11:03.490 "data_size": 65536 00:11:03.490 }, 00:11:03.490 { 00:11:03.490 "name": "BaseBdev4", 00:11:03.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.490 "is_configured": false, 00:11:03.490 "data_offset": 0, 00:11:03.490 "data_size": 0 00:11:03.490 } 00:11:03.490 ] 00:11:03.490 }' 00:11:03.490 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:11:03.490 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.749 [2024-11-10 15:20:09.987714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.749 [2024-11-10 15:20:09.987841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:03.749 [2024-11-10 15:20:09.987871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.749 [2024-11-10 15:20:09.988214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:03.749 [2024-11-10 15:20:09.988389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:03.749 [2024-11-10 15:20:09.988432] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:03.749 [2024-11-10 15:20:09.988689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.749 BaseBdev4 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:03.749 15:20:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.749 15:20:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.749 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.749 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.749 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.749 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.749 [ 00:11:03.749 { 00:11:03.749 "name": "BaseBdev4", 00:11:03.749 "aliases": [ 00:11:03.749 "28025e44-c35e-4084-adc5-20d528c69458" 00:11:03.749 ], 00:11:03.749 "product_name": "Malloc disk", 00:11:03.749 "block_size": 512, 00:11:03.749 "num_blocks": 65536, 00:11:03.749 "uuid": "28025e44-c35e-4084-adc5-20d528c69458", 00:11:03.749 "assigned_rate_limits": { 00:11:03.749 "rw_ios_per_sec": 0, 00:11:03.749 "rw_mbytes_per_sec": 0, 00:11:03.749 "r_mbytes_per_sec": 0, 00:11:03.749 "w_mbytes_per_sec": 0 00:11:03.749 }, 00:11:03.749 "claimed": true, 00:11:03.749 "claim_type": "exclusive_write", 00:11:03.749 "zoned": false, 00:11:03.749 "supported_io_types": { 00:11:03.749 "read": true, 00:11:03.749 "write": true, 00:11:03.749 "unmap": true, 00:11:03.749 "flush": true, 00:11:03.749 "reset": true, 00:11:03.749 "nvme_admin": false, 00:11:03.749 "nvme_io": false, 00:11:03.749 "nvme_io_md": false, 00:11:03.749 "write_zeroes": true, 00:11:03.749 "zcopy": true, 00:11:03.749 "get_zone_info": false, 
00:11:03.749 "zone_management": false, 00:11:03.749 "zone_append": false, 00:11:03.749 "compare": false, 00:11:03.749 "compare_and_write": false, 00:11:03.749 "abort": true, 00:11:03.749 "seek_hole": false, 00:11:03.749 "seek_data": false, 00:11:03.749 "copy": true, 00:11:03.749 "nvme_iov_md": false 00:11:03.749 }, 00:11:03.749 "memory_domains": [ 00:11:03.749 { 00:11:03.749 "dma_device_id": "system", 00:11:03.749 "dma_device_type": 1 00:11:03.749 }, 00:11:03.749 { 00:11:03.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.750 "dma_device_type": 2 00:11:03.750 } 00:11:03.750 ], 00:11:03.750 "driver_specific": {} 00:11:03.750 } 00:11:03.750 ] 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.750 "name": "Existed_Raid", 00:11:03.750 "uuid": "9e31758b-5a9a-4907-97cb-710b20ac5933", 00:11:03.750 "strip_size_kb": 64, 00:11:03.750 "state": "online", 00:11:03.750 "raid_level": "concat", 00:11:03.750 "superblock": false, 00:11:03.750 "num_base_bdevs": 4, 00:11:03.750 "num_base_bdevs_discovered": 4, 00:11:03.750 "num_base_bdevs_operational": 4, 00:11:03.750 "base_bdevs_list": [ 00:11:03.750 { 00:11:03.750 "name": "BaseBdev1", 00:11:03.750 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:03.750 "is_configured": true, 00:11:03.750 "data_offset": 0, 00:11:03.750 "data_size": 65536 00:11:03.750 }, 00:11:03.750 { 00:11:03.750 "name": "BaseBdev2", 00:11:03.750 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:03.750 "is_configured": true, 00:11:03.750 "data_offset": 0, 00:11:03.750 "data_size": 65536 00:11:03.750 }, 00:11:03.750 { 00:11:03.750 "name": "BaseBdev3", 00:11:03.750 "uuid": "7ae569d5-c83b-4bdf-800c-fc8b94c7670a", 00:11:03.750 "is_configured": true, 00:11:03.750 "data_offset": 0, 00:11:03.750 "data_size": 65536 00:11:03.750 }, 00:11:03.750 { 
00:11:03.750 "name": "BaseBdev4", 00:11:03.750 "uuid": "28025e44-c35e-4084-adc5-20d528c69458", 00:11:03.750 "is_configured": true, 00:11:03.750 "data_offset": 0, 00:11:03.750 "data_size": 65536 00:11:03.750 } 00:11:03.750 ] 00:11:03.750 }' 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.750 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.320 [2024-11-10 15:20:10.500252] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.320 "name": "Existed_Raid", 00:11:04.320 "aliases": [ 00:11:04.320 
"9e31758b-5a9a-4907-97cb-710b20ac5933" 00:11:04.320 ], 00:11:04.320 "product_name": "Raid Volume", 00:11:04.320 "block_size": 512, 00:11:04.320 "num_blocks": 262144, 00:11:04.320 "uuid": "9e31758b-5a9a-4907-97cb-710b20ac5933", 00:11:04.320 "assigned_rate_limits": { 00:11:04.320 "rw_ios_per_sec": 0, 00:11:04.320 "rw_mbytes_per_sec": 0, 00:11:04.320 "r_mbytes_per_sec": 0, 00:11:04.320 "w_mbytes_per_sec": 0 00:11:04.320 }, 00:11:04.320 "claimed": false, 00:11:04.320 "zoned": false, 00:11:04.320 "supported_io_types": { 00:11:04.320 "read": true, 00:11:04.320 "write": true, 00:11:04.320 "unmap": true, 00:11:04.320 "flush": true, 00:11:04.320 "reset": true, 00:11:04.320 "nvme_admin": false, 00:11:04.320 "nvme_io": false, 00:11:04.320 "nvme_io_md": false, 00:11:04.320 "write_zeroes": true, 00:11:04.320 "zcopy": false, 00:11:04.320 "get_zone_info": false, 00:11:04.320 "zone_management": false, 00:11:04.320 "zone_append": false, 00:11:04.320 "compare": false, 00:11:04.320 "compare_and_write": false, 00:11:04.320 "abort": false, 00:11:04.320 "seek_hole": false, 00:11:04.320 "seek_data": false, 00:11:04.320 "copy": false, 00:11:04.320 "nvme_iov_md": false 00:11:04.320 }, 00:11:04.320 "memory_domains": [ 00:11:04.320 { 00:11:04.320 "dma_device_id": "system", 00:11:04.320 "dma_device_type": 1 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.320 "dma_device_type": 2 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "system", 00:11:04.320 "dma_device_type": 1 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.320 "dma_device_type": 2 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "system", 00:11:04.320 "dma_device_type": 1 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.320 "dma_device_type": 2 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "dma_device_id": "system", 00:11:04.320 "dma_device_type": 1 00:11:04.320 }, 
00:11:04.320 { 00:11:04.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.320 "dma_device_type": 2 00:11:04.320 } 00:11:04.320 ], 00:11:04.320 "driver_specific": { 00:11:04.320 "raid": { 00:11:04.320 "uuid": "9e31758b-5a9a-4907-97cb-710b20ac5933", 00:11:04.320 "strip_size_kb": 64, 00:11:04.320 "state": "online", 00:11:04.320 "raid_level": "concat", 00:11:04.320 "superblock": false, 00:11:04.320 "num_base_bdevs": 4, 00:11:04.320 "num_base_bdevs_discovered": 4, 00:11:04.320 "num_base_bdevs_operational": 4, 00:11:04.320 "base_bdevs_list": [ 00:11:04.320 { 00:11:04.320 "name": "BaseBdev1", 00:11:04.320 "uuid": "41e71f22-9997-44a7-b47e-8888ba62733c", 00:11:04.320 "is_configured": true, 00:11:04.320 "data_offset": 0, 00:11:04.320 "data_size": 65536 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "name": "BaseBdev2", 00:11:04.320 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:04.320 "is_configured": true, 00:11:04.320 "data_offset": 0, 00:11:04.320 "data_size": 65536 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "name": "BaseBdev3", 00:11:04.320 "uuid": "7ae569d5-c83b-4bdf-800c-fc8b94c7670a", 00:11:04.320 "is_configured": true, 00:11:04.320 "data_offset": 0, 00:11:04.320 "data_size": 65536 00:11:04.320 }, 00:11:04.320 { 00:11:04.320 "name": "BaseBdev4", 00:11:04.320 "uuid": "28025e44-c35e-4084-adc5-20d528c69458", 00:11:04.320 "is_configured": true, 00:11:04.320 "data_offset": 0, 00:11:04.320 "data_size": 65536 00:11:04.320 } 00:11:04.320 ] 00:11:04.320 } 00:11:04.320 } 00:11:04.320 }' 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.320 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:04.320 BaseBdev2 00:11:04.320 BaseBdev3 00:11:04.321 BaseBdev4' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.321 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.580 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd 
bdev_malloc_delete BaseBdev1 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.581 [2024-11-10 15:20:10.820071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.581 [2024-11-10 15:20:10.820098] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.581 [2024-11-10 15:20:10.820159] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.581 "name": "Existed_Raid", 00:11:04.581 "uuid": "9e31758b-5a9a-4907-97cb-710b20ac5933", 00:11:04.581 "strip_size_kb": 64, 00:11:04.581 "state": "offline", 00:11:04.581 "raid_level": "concat", 00:11:04.581 "superblock": false, 00:11:04.581 "num_base_bdevs": 4, 00:11:04.581 "num_base_bdevs_discovered": 3, 00:11:04.581 "num_base_bdevs_operational": 3, 00:11:04.581 "base_bdevs_list": [ 00:11:04.581 { 00:11:04.581 "name": null, 00:11:04.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.581 "is_configured": false, 00:11:04.581 "data_offset": 0, 00:11:04.581 "data_size": 65536 00:11:04.581 }, 00:11:04.581 { 00:11:04.581 "name": "BaseBdev2", 00:11:04.581 "uuid": "4c956945-95d6-4e96-ae5c-9c9e3c1bd463", 00:11:04.581 "is_configured": true, 00:11:04.581 "data_offset": 0, 00:11:04.581 "data_size": 65536 00:11:04.581 }, 00:11:04.581 { 00:11:04.581 "name": "BaseBdev3", 00:11:04.581 "uuid": "7ae569d5-c83b-4bdf-800c-fc8b94c7670a", 
00:11:04.581 "is_configured": true, 00:11:04.581 "data_offset": 0, 00:11:04.581 "data_size": 65536 00:11:04.581 }, 00:11:04.581 { 00:11:04.581 "name": "BaseBdev4", 00:11:04.581 "uuid": "28025e44-c35e-4084-adc5-20d528c69458", 00:11:04.581 "is_configured": true, 00:11:04.581 "data_offset": 0, 00:11:04.581 "data_size": 65536 00:11:04.581 } 00:11:04.581 ] 00:11:04.581 }' 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.581 15:20:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.148 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:05.148 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.148 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.148 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.148 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 [2024-11-10 15:20:11.267670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 [2024-11-10 15:20:11.338681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.149 
15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 [2024-11-10 15:20:11.405989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:05.149 [2024-11-10 15:20:11.406057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.149 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.409 [ 00:11:05.409 { 00:11:05.409 "name": "BaseBdev2", 00:11:05.409 "aliases": [ 00:11:05.409 "4e2a8853-1c11-474d-8ff0-a00300721e88" 00:11:05.409 ], 00:11:05.409 "product_name": "Malloc disk", 00:11:05.409 "block_size": 512, 00:11:05.409 "num_blocks": 65536, 00:11:05.409 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:05.409 "assigned_rate_limits": { 00:11:05.409 "rw_ios_per_sec": 0, 00:11:05.409 "rw_mbytes_per_sec": 0, 00:11:05.409 "r_mbytes_per_sec": 0, 00:11:05.409 "w_mbytes_per_sec": 0 00:11:05.409 }, 00:11:05.409 "claimed": false, 00:11:05.409 "zoned": false, 00:11:05.409 "supported_io_types": { 00:11:05.409 "read": true, 00:11:05.409 "write": true, 00:11:05.409 "unmap": true, 00:11:05.409 "flush": true, 00:11:05.409 "reset": true, 00:11:05.409 "nvme_admin": false, 00:11:05.409 "nvme_io": false, 00:11:05.409 "nvme_io_md": false, 00:11:05.409 "write_zeroes": true, 00:11:05.409 "zcopy": true, 00:11:05.409 "get_zone_info": false, 00:11:05.409 "zone_management": false, 00:11:05.409 "zone_append": false, 00:11:05.409 "compare": false, 00:11:05.409 "compare_and_write": false, 00:11:05.409 "abort": true, 00:11:05.409 "seek_hole": false, 00:11:05.409 "seek_data": false, 00:11:05.409 "copy": true, 00:11:05.409 "nvme_iov_md": false 00:11:05.409 }, 00:11:05.409 "memory_domains": [ 00:11:05.409 { 00:11:05.409 "dma_device_id": "system", 00:11:05.409 "dma_device_type": 1 00:11:05.409 }, 
00:11:05.409 { 00:11:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.409 "dma_device_type": 2 00:11:05.409 } 00:11:05.409 ], 00:11:05.409 "driver_specific": {} 00:11:05.409 } 00:11:05.409 ] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.409 BaseBdev3 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.409 [ 00:11:05.409 { 00:11:05.409 "name": "BaseBdev3", 00:11:05.409 "aliases": [ 00:11:05.409 "ce31f09d-73b8-47e4-8a7e-5a317140621b" 00:11:05.409 ], 00:11:05.409 "product_name": "Malloc disk", 00:11:05.409 "block_size": 512, 00:11:05.409 "num_blocks": 65536, 00:11:05.409 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:05.409 "assigned_rate_limits": { 00:11:05.409 "rw_ios_per_sec": 0, 00:11:05.409 "rw_mbytes_per_sec": 0, 00:11:05.409 "r_mbytes_per_sec": 0, 00:11:05.409 "w_mbytes_per_sec": 0 00:11:05.409 }, 00:11:05.409 "claimed": false, 00:11:05.409 "zoned": false, 00:11:05.409 "supported_io_types": { 00:11:05.409 "read": true, 00:11:05.409 "write": true, 00:11:05.409 "unmap": true, 00:11:05.409 "flush": true, 00:11:05.409 "reset": true, 00:11:05.409 "nvme_admin": false, 00:11:05.409 "nvme_io": false, 00:11:05.409 "nvme_io_md": false, 00:11:05.409 "write_zeroes": true, 00:11:05.409 "zcopy": true, 00:11:05.409 "get_zone_info": false, 00:11:05.409 "zone_management": false, 00:11:05.409 "zone_append": false, 00:11:05.409 "compare": false, 00:11:05.409 "compare_and_write": false, 00:11:05.409 "abort": true, 00:11:05.409 "seek_hole": false, 00:11:05.409 "seek_data": false, 00:11:05.409 "copy": true, 00:11:05.409 "nvme_iov_md": false 00:11:05.409 }, 00:11:05.409 "memory_domains": [ 00:11:05.409 { 00:11:05.409 "dma_device_id": "system", 00:11:05.409 "dma_device_type": 1 00:11:05.409 }, 00:11:05.409 { 
00:11:05.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.409 "dma_device_type": 2 00:11:05.409 } 00:11:05.409 ], 00:11:05.409 "driver_specific": {} 00:11:05.409 } 00:11:05.409 ] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.409 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 BaseBdev4 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.410 
15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 [ 00:11:05.410 { 00:11:05.410 "name": "BaseBdev4", 00:11:05.410 "aliases": [ 00:11:05.410 "2b4d20b0-7c8e-49b3-84a2-3545384f13bf" 00:11:05.410 ], 00:11:05.410 "product_name": "Malloc disk", 00:11:05.410 "block_size": 512, 00:11:05.410 "num_blocks": 65536, 00:11:05.410 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:05.410 "assigned_rate_limits": { 00:11:05.410 "rw_ios_per_sec": 0, 00:11:05.410 "rw_mbytes_per_sec": 0, 00:11:05.410 "r_mbytes_per_sec": 0, 00:11:05.410 "w_mbytes_per_sec": 0 00:11:05.410 }, 00:11:05.410 "claimed": false, 00:11:05.410 "zoned": false, 00:11:05.410 "supported_io_types": { 00:11:05.410 "read": true, 00:11:05.410 "write": true, 00:11:05.410 "unmap": true, 00:11:05.410 "flush": true, 00:11:05.410 "reset": true, 00:11:05.410 "nvme_admin": false, 00:11:05.410 "nvme_io": false, 00:11:05.410 "nvme_io_md": false, 00:11:05.410 "write_zeroes": true, 00:11:05.410 "zcopy": true, 00:11:05.410 "get_zone_info": false, 00:11:05.410 "zone_management": false, 00:11:05.410 "zone_append": false, 00:11:05.410 "compare": false, 00:11:05.410 "compare_and_write": false, 00:11:05.410 "abort": true, 00:11:05.410 "seek_hole": false, 00:11:05.410 "seek_data": false, 00:11:05.410 "copy": true, 00:11:05.410 "nvme_iov_md": false 00:11:05.410 }, 00:11:05.410 "memory_domains": [ 00:11:05.410 { 00:11:05.410 "dma_device_id": "system", 00:11:05.410 "dma_device_type": 1 00:11:05.410 }, 00:11:05.410 { 00:11:05.410 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.410 "dma_device_type": 2 00:11:05.410 } 00:11:05.410 ], 00:11:05.410 "driver_specific": {} 00:11:05.410 } 00:11:05.410 ] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 [2024-11-10 15:20:11.635237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.410 [2024-11-10 15:20:11.635320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.410 [2024-11-10 15:20:11.635360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.410 [2024-11-10 15:20:11.637173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.410 [2024-11-10 15:20:11.637258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.410 "name": "Existed_Raid", 00:11:05.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.410 "strip_size_kb": 64, 00:11:05.410 "state": "configuring", 00:11:05.410 "raid_level": "concat", 00:11:05.410 "superblock": false, 00:11:05.410 "num_base_bdevs": 4, 00:11:05.410 "num_base_bdevs_discovered": 3, 00:11:05.410 "num_base_bdevs_operational": 4, 00:11:05.410 "base_bdevs_list": [ 
00:11:05.410 { 00:11:05.410 "name": "BaseBdev1", 00:11:05.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.410 "is_configured": false, 00:11:05.410 "data_offset": 0, 00:11:05.410 "data_size": 0 00:11:05.410 }, 00:11:05.410 { 00:11:05.410 "name": "BaseBdev2", 00:11:05.410 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:05.410 "is_configured": true, 00:11:05.410 "data_offset": 0, 00:11:05.410 "data_size": 65536 00:11:05.410 }, 00:11:05.410 { 00:11:05.410 "name": "BaseBdev3", 00:11:05.410 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:05.410 "is_configured": true, 00:11:05.410 "data_offset": 0, 00:11:05.410 "data_size": 65536 00:11:05.410 }, 00:11:05.410 { 00:11:05.410 "name": "BaseBdev4", 00:11:05.410 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:05.410 "is_configured": true, 00:11:05.410 "data_offset": 0, 00:11:05.410 "data_size": 65536 00:11:05.410 } 00:11:05.410 ] 00:11:05.410 }' 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.410 15:20:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.979 [2024-11-10 15:20:12.103368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.979 15:20:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.979 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.980 "name": "Existed_Raid", 00:11:05.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.980 "strip_size_kb": 64, 00:11:05.980 "state": "configuring", 00:11:05.980 "raid_level": "concat", 00:11:05.980 "superblock": false, 00:11:05.980 "num_base_bdevs": 4, 00:11:05.980 "num_base_bdevs_discovered": 2, 00:11:05.980 "num_base_bdevs_operational": 4, 00:11:05.980 "base_bdevs_list": [ 00:11:05.980 { 00:11:05.980 "name": "BaseBdev1", 
00:11:05.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.980 "is_configured": false, 00:11:05.980 "data_offset": 0, 00:11:05.980 "data_size": 0 00:11:05.980 }, 00:11:05.980 { 00:11:05.980 "name": null, 00:11:05.980 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:05.980 "is_configured": false, 00:11:05.980 "data_offset": 0, 00:11:05.980 "data_size": 65536 00:11:05.980 }, 00:11:05.980 { 00:11:05.980 "name": "BaseBdev3", 00:11:05.980 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:05.980 "is_configured": true, 00:11:05.980 "data_offset": 0, 00:11:05.980 "data_size": 65536 00:11:05.980 }, 00:11:05.980 { 00:11:05.980 "name": "BaseBdev4", 00:11:05.980 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:05.980 "is_configured": true, 00:11:05.980 "data_offset": 0, 00:11:05.980 "data_size": 65536 00:11:05.980 } 00:11:05.980 ] 00:11:05.980 }' 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.980 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.239 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.239 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.239 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.239 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.239 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.499 [2024-11-10 15:20:12.638392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.499 BaseBdev1 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.499 [ 00:11:06.499 { 00:11:06.499 "name": "BaseBdev1", 00:11:06.499 "aliases": [ 00:11:06.499 "d2f570c8-63c4-48c2-8908-42173baf1c08" 00:11:06.499 ], 00:11:06.499 
"product_name": "Malloc disk", 00:11:06.499 "block_size": 512, 00:11:06.499 "num_blocks": 65536, 00:11:06.499 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:06.499 "assigned_rate_limits": { 00:11:06.499 "rw_ios_per_sec": 0, 00:11:06.499 "rw_mbytes_per_sec": 0, 00:11:06.499 "r_mbytes_per_sec": 0, 00:11:06.499 "w_mbytes_per_sec": 0 00:11:06.499 }, 00:11:06.499 "claimed": true, 00:11:06.499 "claim_type": "exclusive_write", 00:11:06.499 "zoned": false, 00:11:06.499 "supported_io_types": { 00:11:06.499 "read": true, 00:11:06.499 "write": true, 00:11:06.499 "unmap": true, 00:11:06.499 "flush": true, 00:11:06.499 "reset": true, 00:11:06.499 "nvme_admin": false, 00:11:06.499 "nvme_io": false, 00:11:06.499 "nvme_io_md": false, 00:11:06.499 "write_zeroes": true, 00:11:06.499 "zcopy": true, 00:11:06.499 "get_zone_info": false, 00:11:06.499 "zone_management": false, 00:11:06.499 "zone_append": false, 00:11:06.499 "compare": false, 00:11:06.499 "compare_and_write": false, 00:11:06.499 "abort": true, 00:11:06.499 "seek_hole": false, 00:11:06.499 "seek_data": false, 00:11:06.499 "copy": true, 00:11:06.499 "nvme_iov_md": false 00:11:06.499 }, 00:11:06.499 "memory_domains": [ 00:11:06.499 { 00:11:06.499 "dma_device_id": "system", 00:11:06.499 "dma_device_type": 1 00:11:06.499 }, 00:11:06.499 { 00:11:06.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.499 "dma_device_type": 2 00:11:06.499 } 00:11:06.499 ], 00:11:06.499 "driver_specific": {} 00:11:06.499 } 00:11:06.499 ] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.499 15:20:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.499 "name": "Existed_Raid", 00:11:06.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.499 "strip_size_kb": 64, 00:11:06.499 "state": "configuring", 00:11:06.499 "raid_level": "concat", 00:11:06.499 "superblock": false, 00:11:06.499 "num_base_bdevs": 4, 00:11:06.499 "num_base_bdevs_discovered": 3, 00:11:06.499 "num_base_bdevs_operational": 4, 00:11:06.499 "base_bdevs_list": [ 00:11:06.499 { 00:11:06.499 "name": "BaseBdev1", 
00:11:06.499 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:06.499 "is_configured": true, 00:11:06.499 "data_offset": 0, 00:11:06.499 "data_size": 65536 00:11:06.499 }, 00:11:06.499 { 00:11:06.499 "name": null, 00:11:06.499 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:06.499 "is_configured": false, 00:11:06.499 "data_offset": 0, 00:11:06.499 "data_size": 65536 00:11:06.499 }, 00:11:06.499 { 00:11:06.499 "name": "BaseBdev3", 00:11:06.499 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:06.499 "is_configured": true, 00:11:06.499 "data_offset": 0, 00:11:06.499 "data_size": 65536 00:11:06.499 }, 00:11:06.499 { 00:11:06.499 "name": "BaseBdev4", 00:11:06.499 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:06.499 "is_configured": true, 00:11:06.499 "data_offset": 0, 00:11:06.499 "data_size": 65536 00:11:06.499 } 00:11:06.499 ] 00:11:06.499 }' 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.499 15:20:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 [2024-11-10 15:20:13.186612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.068 "name": "Existed_Raid", 00:11:07.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.068 "strip_size_kb": 64, 00:11:07.068 "state": "configuring", 00:11:07.068 "raid_level": "concat", 00:11:07.068 "superblock": false, 00:11:07.068 "num_base_bdevs": 4, 00:11:07.068 "num_base_bdevs_discovered": 2, 00:11:07.068 "num_base_bdevs_operational": 4, 00:11:07.068 "base_bdevs_list": [ 00:11:07.068 { 00:11:07.068 "name": "BaseBdev1", 00:11:07.068 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:07.068 "is_configured": true, 00:11:07.068 "data_offset": 0, 00:11:07.068 "data_size": 65536 00:11:07.068 }, 00:11:07.068 { 00:11:07.068 "name": null, 00:11:07.068 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:07.068 "is_configured": false, 00:11:07.068 "data_offset": 0, 00:11:07.068 "data_size": 65536 00:11:07.068 }, 00:11:07.068 { 00:11:07.068 "name": null, 00:11:07.068 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:07.068 "is_configured": false, 00:11:07.068 "data_offset": 0, 00:11:07.068 "data_size": 65536 00:11:07.068 }, 00:11:07.068 { 00:11:07.068 "name": "BaseBdev4", 00:11:07.068 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:07.068 "is_configured": true, 00:11:07.068 "data_offset": 0, 00:11:07.068 "data_size": 65536 00:11:07.068 } 00:11:07.068 ] 00:11:07.068 }' 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.068 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.328 [2024-11-10 15:20:13.606783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.328 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.328 "name": "Existed_Raid", 00:11:07.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.328 "strip_size_kb": 64, 00:11:07.328 "state": "configuring", 00:11:07.328 "raid_level": "concat", 00:11:07.328 "superblock": false, 00:11:07.328 "num_base_bdevs": 4, 00:11:07.328 "num_base_bdevs_discovered": 3, 00:11:07.328 "num_base_bdevs_operational": 4, 00:11:07.328 "base_bdevs_list": [ 00:11:07.328 { 00:11:07.328 "name": "BaseBdev1", 00:11:07.328 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:07.328 "is_configured": true, 00:11:07.328 "data_offset": 0, 00:11:07.328 "data_size": 65536 00:11:07.328 }, 00:11:07.328 { 00:11:07.328 "name": null, 00:11:07.329 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:07.329 "is_configured": false, 00:11:07.329 "data_offset": 0, 00:11:07.329 "data_size": 65536 00:11:07.329 }, 00:11:07.329 { 00:11:07.329 "name": "BaseBdev3", 00:11:07.329 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:07.329 "is_configured": true, 00:11:07.329 "data_offset": 0, 00:11:07.329 "data_size": 65536 00:11:07.329 }, 00:11:07.329 { 00:11:07.329 "name": "BaseBdev4", 00:11:07.329 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:07.329 
"is_configured": true, 00:11:07.329 "data_offset": 0, 00:11:07.329 "data_size": 65536 00:11:07.329 } 00:11:07.329 ] 00:11:07.329 }' 00:11:07.329 15:20:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.329 15:20:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 [2024-11-10 15:20:14.066907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.948 15:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.948 "name": "Existed_Raid", 00:11:07.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.948 "strip_size_kb": 64, 00:11:07.948 "state": "configuring", 00:11:07.948 "raid_level": "concat", 00:11:07.948 "superblock": false, 00:11:07.948 "num_base_bdevs": 4, 00:11:07.948 "num_base_bdevs_discovered": 2, 00:11:07.948 "num_base_bdevs_operational": 4, 00:11:07.948 "base_bdevs_list": [ 00:11:07.948 { 00:11:07.948 "name": null, 00:11:07.948 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:07.948 "is_configured": false, 00:11:07.948 "data_offset": 0, 
00:11:07.948 "data_size": 65536 00:11:07.948 }, 00:11:07.948 { 00:11:07.948 "name": null, 00:11:07.948 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:07.948 "is_configured": false, 00:11:07.948 "data_offset": 0, 00:11:07.948 "data_size": 65536 00:11:07.948 }, 00:11:07.948 { 00:11:07.948 "name": "BaseBdev3", 00:11:07.948 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:07.948 "is_configured": true, 00:11:07.948 "data_offset": 0, 00:11:07.948 "data_size": 65536 00:11:07.948 }, 00:11:07.948 { 00:11:07.948 "name": "BaseBdev4", 00:11:07.948 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:07.948 "is_configured": true, 00:11:07.948 "data_offset": 0, 00:11:07.948 "data_size": 65536 00:11:07.948 } 00:11:07.948 ] 00:11:07.948 }' 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.948 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.208 
[2024-11-10 15:20:14.545609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.208 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.467 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.467 15:20:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.467 "name": "Existed_Raid", 00:11:08.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.467 "strip_size_kb": 64, 00:11:08.467 "state": "configuring", 00:11:08.467 "raid_level": "concat", 00:11:08.467 "superblock": false, 00:11:08.467 "num_base_bdevs": 4, 00:11:08.467 "num_base_bdevs_discovered": 3, 00:11:08.467 "num_base_bdevs_operational": 4, 00:11:08.467 "base_bdevs_list": [ 00:11:08.467 { 00:11:08.467 "name": null, 00:11:08.467 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:08.467 "is_configured": false, 00:11:08.467 "data_offset": 0, 00:11:08.467 "data_size": 65536 00:11:08.467 }, 00:11:08.467 { 00:11:08.467 "name": "BaseBdev2", 00:11:08.467 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:08.467 "is_configured": true, 00:11:08.467 "data_offset": 0, 00:11:08.467 "data_size": 65536 00:11:08.467 }, 00:11:08.467 { 00:11:08.467 "name": "BaseBdev3", 00:11:08.467 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:08.467 "is_configured": true, 00:11:08.467 "data_offset": 0, 00:11:08.467 "data_size": 65536 00:11:08.467 }, 00:11:08.467 { 00:11:08.467 "name": "BaseBdev4", 00:11:08.467 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:08.467 "is_configured": true, 00:11:08.467 "data_offset": 0, 00:11:08.467 "data_size": 65536 00:11:08.467 } 00:11:08.467 ] 00:11:08.467 }' 00:11:08.467 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.467 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 15:20:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.727 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.727 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 15:20:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:08.727 15:20:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:08.727 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.728 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d2f570c8-63c4-48c2-8908-42173baf1c08 00:11:08.728 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.728 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.728 [2024-11-10 15:20:15.084686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:08.728 [2024-11-10 15:20:15.084783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:08.728 [2024-11-10 15:20:15.084812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:08.728 [2024-11-10 15:20:15.085122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:08.728 [2024-11-10 15:20:15.085286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:08.728 [2024-11-10 15:20:15.085327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x617000007e80 00:11:08.728 [2024-11-10 15:20:15.085540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.728 NewBaseBdev 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.988 [ 00:11:08.988 { 00:11:08.988 "name": "NewBaseBdev", 00:11:08.988 "aliases": [ 00:11:08.988 "d2f570c8-63c4-48c2-8908-42173baf1c08" 00:11:08.988 ], 00:11:08.988 "product_name": "Malloc disk", 00:11:08.988 "block_size": 512, 00:11:08.988 
"num_blocks": 65536, 00:11:08.988 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:08.988 "assigned_rate_limits": { 00:11:08.988 "rw_ios_per_sec": 0, 00:11:08.988 "rw_mbytes_per_sec": 0, 00:11:08.988 "r_mbytes_per_sec": 0, 00:11:08.988 "w_mbytes_per_sec": 0 00:11:08.988 }, 00:11:08.988 "claimed": true, 00:11:08.988 "claim_type": "exclusive_write", 00:11:08.988 "zoned": false, 00:11:08.988 "supported_io_types": { 00:11:08.988 "read": true, 00:11:08.988 "write": true, 00:11:08.988 "unmap": true, 00:11:08.988 "flush": true, 00:11:08.988 "reset": true, 00:11:08.988 "nvme_admin": false, 00:11:08.988 "nvme_io": false, 00:11:08.988 "nvme_io_md": false, 00:11:08.988 "write_zeroes": true, 00:11:08.988 "zcopy": true, 00:11:08.988 "get_zone_info": false, 00:11:08.988 "zone_management": false, 00:11:08.988 "zone_append": false, 00:11:08.988 "compare": false, 00:11:08.988 "compare_and_write": false, 00:11:08.988 "abort": true, 00:11:08.988 "seek_hole": false, 00:11:08.988 "seek_data": false, 00:11:08.988 "copy": true, 00:11:08.988 "nvme_iov_md": false 00:11:08.988 }, 00:11:08.988 "memory_domains": [ 00:11:08.988 { 00:11:08.988 "dma_device_id": "system", 00:11:08.988 "dma_device_type": 1 00:11:08.988 }, 00:11:08.988 { 00:11:08.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.988 "dma_device_type": 2 00:11:08.988 } 00:11:08.988 ], 00:11:08.988 "driver_specific": {} 00:11:08.988 } 00:11:08.988 ] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.988 "name": "Existed_Raid", 00:11:08.988 "uuid": "6810f9c6-d68a-42a3-8a5d-29938645a48b", 00:11:08.988 "strip_size_kb": 64, 00:11:08.988 "state": "online", 00:11:08.988 "raid_level": "concat", 00:11:08.988 "superblock": false, 00:11:08.988 "num_base_bdevs": 4, 00:11:08.988 "num_base_bdevs_discovered": 4, 00:11:08.988 "num_base_bdevs_operational": 4, 00:11:08.988 "base_bdevs_list": [ 00:11:08.988 { 00:11:08.988 "name": "NewBaseBdev", 00:11:08.988 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:08.988 
"is_configured": true, 00:11:08.988 "data_offset": 0, 00:11:08.988 "data_size": 65536 00:11:08.988 }, 00:11:08.988 { 00:11:08.988 "name": "BaseBdev2", 00:11:08.988 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:08.988 "is_configured": true, 00:11:08.988 "data_offset": 0, 00:11:08.988 "data_size": 65536 00:11:08.988 }, 00:11:08.988 { 00:11:08.988 "name": "BaseBdev3", 00:11:08.988 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:08.988 "is_configured": true, 00:11:08.988 "data_offset": 0, 00:11:08.988 "data_size": 65536 00:11:08.988 }, 00:11:08.988 { 00:11:08.988 "name": "BaseBdev4", 00:11:08.988 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:08.988 "is_configured": true, 00:11:08.988 "data_offset": 0, 00:11:08.988 "data_size": 65536 00:11:08.988 } 00:11:08.988 ] 00:11:08.988 }' 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.988 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.248 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.508 [2024-11-10 15:20:15.613251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.508 "name": "Existed_Raid", 00:11:09.508 "aliases": [ 00:11:09.508 "6810f9c6-d68a-42a3-8a5d-29938645a48b" 00:11:09.508 ], 00:11:09.508 "product_name": "Raid Volume", 00:11:09.508 "block_size": 512, 00:11:09.508 "num_blocks": 262144, 00:11:09.508 "uuid": "6810f9c6-d68a-42a3-8a5d-29938645a48b", 00:11:09.508 "assigned_rate_limits": { 00:11:09.508 "rw_ios_per_sec": 0, 00:11:09.508 "rw_mbytes_per_sec": 0, 00:11:09.508 "r_mbytes_per_sec": 0, 00:11:09.508 "w_mbytes_per_sec": 0 00:11:09.508 }, 00:11:09.508 "claimed": false, 00:11:09.508 "zoned": false, 00:11:09.508 "supported_io_types": { 00:11:09.508 "read": true, 00:11:09.508 "write": true, 00:11:09.508 "unmap": true, 00:11:09.508 "flush": true, 00:11:09.508 "reset": true, 00:11:09.508 "nvme_admin": false, 00:11:09.508 "nvme_io": false, 00:11:09.508 "nvme_io_md": false, 00:11:09.508 "write_zeroes": true, 00:11:09.508 "zcopy": false, 00:11:09.508 "get_zone_info": false, 00:11:09.508 "zone_management": false, 00:11:09.508 "zone_append": false, 00:11:09.508 "compare": false, 00:11:09.508 "compare_and_write": false, 00:11:09.508 "abort": false, 00:11:09.508 "seek_hole": false, 00:11:09.508 "seek_data": false, 00:11:09.508 "copy": false, 00:11:09.508 "nvme_iov_md": false 00:11:09.508 }, 00:11:09.508 "memory_domains": [ 00:11:09.508 { 00:11:09.508 "dma_device_id": "system", 00:11:09.508 "dma_device_type": 1 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.508 "dma_device_type": 2 00:11:09.508 }, 
00:11:09.508 { 00:11:09.508 "dma_device_id": "system", 00:11:09.508 "dma_device_type": 1 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.508 "dma_device_type": 2 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "system", 00:11:09.508 "dma_device_type": 1 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.508 "dma_device_type": 2 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "system", 00:11:09.508 "dma_device_type": 1 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.508 "dma_device_type": 2 00:11:09.508 } 00:11:09.508 ], 00:11:09.508 "driver_specific": { 00:11:09.508 "raid": { 00:11:09.508 "uuid": "6810f9c6-d68a-42a3-8a5d-29938645a48b", 00:11:09.508 "strip_size_kb": 64, 00:11:09.508 "state": "online", 00:11:09.508 "raid_level": "concat", 00:11:09.508 "superblock": false, 00:11:09.508 "num_base_bdevs": 4, 00:11:09.508 "num_base_bdevs_discovered": 4, 00:11:09.508 "num_base_bdevs_operational": 4, 00:11:09.508 "base_bdevs_list": [ 00:11:09.508 { 00:11:09.508 "name": "NewBaseBdev", 00:11:09.508 "uuid": "d2f570c8-63c4-48c2-8908-42173baf1c08", 00:11:09.508 "is_configured": true, 00:11:09.508 "data_offset": 0, 00:11:09.508 "data_size": 65536 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "name": "BaseBdev2", 00:11:09.508 "uuid": "4e2a8853-1c11-474d-8ff0-a00300721e88", 00:11:09.508 "is_configured": true, 00:11:09.508 "data_offset": 0, 00:11:09.508 "data_size": 65536 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "name": "BaseBdev3", 00:11:09.508 "uuid": "ce31f09d-73b8-47e4-8a7e-5a317140621b", 00:11:09.508 "is_configured": true, 00:11:09.508 "data_offset": 0, 00:11:09.508 "data_size": 65536 00:11:09.508 }, 00:11:09.508 { 00:11:09.508 "name": "BaseBdev4", 00:11:09.508 "uuid": "2b4d20b0-7c8e-49b3-84a2-3545384f13bf", 00:11:09.508 "is_configured": true, 00:11:09.508 "data_offset": 0, 00:11:09.508 "data_size": 65536 
00:11:09.508 } 00:11:09.508 ] 00:11:09.508 } 00:11:09.508 } 00:11:09.508 }' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.508 BaseBdev2 00:11:09.508 BaseBdev3 00:11:09.508 BaseBdev4' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.508 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.509 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.769 [2024-11-10 15:20:15.904948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:09.769 [2024-11-10 15:20:15.904976] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.769 [2024-11-10 15:20:15.905060] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.769 [2024-11-10 15:20:15.905125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.769 [2024-11-10 15:20:15.905141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83555 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 83555 ']' 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 83555 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@957 -- # uname 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 83555 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 83555' 00:11:09.769 killing process with pid 83555 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 83555 00:11:09.769 [2024-11-10 15:20:15.948926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.769 15:20:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 83555 00:11:09.769 [2024-11-10 15:20:15.989860] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:10.029 00:11:10.029 real 0m9.607s 00:11:10.029 user 0m16.444s 00:11:10.029 sys 0m2.044s 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 ************************************ 00:11:10.029 END TEST raid_state_function_test 00:11:10.029 ************************************ 00:11:10.029 15:20:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:10.029 15:20:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:10.029 15:20:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:10.029 
15:20:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 ************************************ 00:11:10.029 START TEST raid_state_function_test_sb 00:11:10.029 ************************************ 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test concat 4 true 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.029 
15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84205 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84205' 
00:11:10.029 Process raid pid: 84205 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84205 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 84205 ']' 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:10.029 15:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.029 [2024-11-10 15:20:16.370326] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:10.029 [2024-11-10 15:20:16.370470] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.289 [2024-11-10 15:20:16.502301] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:10.289 [2024-11-10 15:20:16.525120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.289 [2024-11-10 15:20:16.552932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.289 [2024-11-10 15:20:16.597387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.289 [2024-11-10 15:20:16.597423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:10.859 [2024-11-10 15:20:17.212855] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.859 [2024-11-10 15:20:17.212915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.859 [2024-11-10 15:20:17.212938] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.859 [2024-11-10 15:20:17.212962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.859 [2024-11-10 15:20:17.212972] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.859 [2024-11-10 15:20:17.212982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.859 [2024-11-10 15:20:17.212990] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:10.859 [2024-11-10 15:20:17.212998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.859 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.118 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.118 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.119 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.119 15:20:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.119 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.119 "name": "Existed_Raid", 00:11:11.119 "uuid": "5318d011-56fe-448c-9aef-0e51f3ee2868", 00:11:11.119 "strip_size_kb": 64, 00:11:11.119 "state": "configuring", 00:11:11.119 "raid_level": "concat", 00:11:11.119 "superblock": true, 00:11:11.119 "num_base_bdevs": 4, 00:11:11.119 "num_base_bdevs_discovered": 0, 00:11:11.119 "num_base_bdevs_operational": 4, 00:11:11.119 "base_bdevs_list": [ 00:11:11.119 { 00:11:11.119 "name": "BaseBdev1", 00:11:11.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.119 "is_configured": false, 00:11:11.119 "data_offset": 0, 00:11:11.119 "data_size": 0 00:11:11.119 }, 00:11:11.119 { 00:11:11.119 "name": "BaseBdev2", 00:11:11.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.119 "is_configured": false, 00:11:11.119 "data_offset": 0, 00:11:11.119 "data_size": 0 00:11:11.119 }, 00:11:11.119 { 00:11:11.119 "name": "BaseBdev3", 00:11:11.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.119 "is_configured": false, 00:11:11.119 "data_offset": 0, 00:11:11.119 "data_size": 0 00:11:11.119 }, 00:11:11.119 { 00:11:11.119 "name": "BaseBdev4", 00:11:11.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.119 "is_configured": false, 00:11:11.119 "data_offset": 0, 00:11:11.119 "data_size": 0 00:11:11.119 } 00:11:11.119 ] 00:11:11.119 }' 00:11:11.119 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.119 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:11.379 [2024-11-10 15:20:17.616874] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.379 [2024-11-10 15:20:17.616922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 [2024-11-10 15:20:17.628934] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.379 [2024-11-10 15:20:17.628993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.379 [2024-11-10 15:20:17.629004] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.379 [2024-11-10 15:20:17.629025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.379 [2024-11-10 15:20:17.629034] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.379 [2024-11-10 15:20:17.629041] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.379 [2024-11-10 15:20:17.629049] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.379 [2024-11-10 15:20:17.629056] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 [2024-11-10 15:20:17.650027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.379 BaseBdev1 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.379 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 [ 00:11:11.380 { 00:11:11.380 "name": "BaseBdev1", 00:11:11.380 "aliases": [ 00:11:11.380 "1946678f-13bf-4a75-8a89-498f593d5c88" 00:11:11.380 ], 00:11:11.380 "product_name": "Malloc disk", 00:11:11.380 "block_size": 512, 00:11:11.380 "num_blocks": 65536, 00:11:11.380 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:11.380 "assigned_rate_limits": { 00:11:11.380 "rw_ios_per_sec": 0, 00:11:11.380 "rw_mbytes_per_sec": 0, 00:11:11.380 "r_mbytes_per_sec": 0, 00:11:11.380 "w_mbytes_per_sec": 0 00:11:11.380 }, 00:11:11.380 "claimed": true, 00:11:11.380 "claim_type": "exclusive_write", 00:11:11.380 "zoned": false, 00:11:11.380 "supported_io_types": { 00:11:11.380 "read": true, 00:11:11.380 "write": true, 00:11:11.380 "unmap": true, 00:11:11.380 "flush": true, 00:11:11.380 "reset": true, 00:11:11.380 "nvme_admin": false, 00:11:11.380 "nvme_io": false, 00:11:11.380 "nvme_io_md": false, 00:11:11.380 "write_zeroes": true, 00:11:11.380 "zcopy": true, 00:11:11.380 "get_zone_info": false, 00:11:11.380 "zone_management": false, 00:11:11.380 "zone_append": false, 00:11:11.380 "compare": false, 00:11:11.380 "compare_and_write": false, 00:11:11.380 "abort": true, 00:11:11.380 "seek_hole": false, 00:11:11.380 "seek_data": false, 00:11:11.380 "copy": true, 00:11:11.380 "nvme_iov_md": false 00:11:11.380 }, 00:11:11.380 "memory_domains": [ 00:11:11.380 { 00:11:11.380 "dma_device_id": "system", 00:11:11.380 "dma_device_type": 1 00:11:11.380 }, 00:11:11.380 { 00:11:11.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.380 "dma_device_type": 2 00:11:11.380 } 00:11:11.380 ], 00:11:11.380 "driver_specific": {} 00:11:11.380 } 00:11:11.380 ] 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # 
return 0 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.380 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.639 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.640 "name": "Existed_Raid", 00:11:11.640 "uuid": "4e01969d-e56b-4ade-b7e3-937348f9f7dc", 
00:11:11.640 "strip_size_kb": 64, 00:11:11.640 "state": "configuring", 00:11:11.640 "raid_level": "concat", 00:11:11.640 "superblock": true, 00:11:11.640 "num_base_bdevs": 4, 00:11:11.640 "num_base_bdevs_discovered": 1, 00:11:11.640 "num_base_bdevs_operational": 4, 00:11:11.640 "base_bdevs_list": [ 00:11:11.640 { 00:11:11.640 "name": "BaseBdev1", 00:11:11.640 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:11.640 "is_configured": true, 00:11:11.640 "data_offset": 2048, 00:11:11.640 "data_size": 63488 00:11:11.640 }, 00:11:11.640 { 00:11:11.640 "name": "BaseBdev2", 00:11:11.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.640 "is_configured": false, 00:11:11.640 "data_offset": 0, 00:11:11.640 "data_size": 0 00:11:11.640 }, 00:11:11.640 { 00:11:11.640 "name": "BaseBdev3", 00:11:11.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.640 "is_configured": false, 00:11:11.640 "data_offset": 0, 00:11:11.640 "data_size": 0 00:11:11.640 }, 00:11:11.640 { 00:11:11.640 "name": "BaseBdev4", 00:11:11.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.640 "is_configured": false, 00:11:11.640 "data_offset": 0, 00:11:11.640 "data_size": 0 00:11:11.640 } 00:11:11.640 ] 00:11:11.640 }' 00:11:11.640 15:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.640 15:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.899 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.899 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.899 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.899 [2024-11-10 15:20:18.126220] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.899 [2024-11-10 15:20:18.126286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.899 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.900 [2024-11-10 15:20:18.138260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.900 [2024-11-10 15:20:18.140135] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.900 [2024-11-10 15:20:18.140176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.900 [2024-11-10 15:20:18.140187] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.900 [2024-11-10 15:20:18.140195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.900 [2024-11-10 15:20:18.140202] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.900 [2024-11-10 15:20:18.140209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.900 "name": "Existed_Raid", 00:11:11.900 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:11.900 "strip_size_kb": 64, 00:11:11.900 "state": "configuring", 00:11:11.900 "raid_level": "concat", 00:11:11.900 "superblock": true, 00:11:11.900 
"num_base_bdevs": 4, 00:11:11.900 "num_base_bdevs_discovered": 1, 00:11:11.900 "num_base_bdevs_operational": 4, 00:11:11.900 "base_bdevs_list": [ 00:11:11.900 { 00:11:11.900 "name": "BaseBdev1", 00:11:11.900 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:11.900 "is_configured": true, 00:11:11.900 "data_offset": 2048, 00:11:11.900 "data_size": 63488 00:11:11.900 }, 00:11:11.900 { 00:11:11.900 "name": "BaseBdev2", 00:11:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.900 "is_configured": false, 00:11:11.900 "data_offset": 0, 00:11:11.900 "data_size": 0 00:11:11.900 }, 00:11:11.900 { 00:11:11.900 "name": "BaseBdev3", 00:11:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.900 "is_configured": false, 00:11:11.900 "data_offset": 0, 00:11:11.900 "data_size": 0 00:11:11.900 }, 00:11:11.900 { 00:11:11.900 "name": "BaseBdev4", 00:11:11.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.900 "is_configured": false, 00:11:11.900 "data_offset": 0, 00:11:11.900 "data_size": 0 00:11:11.900 } 00:11:11.900 ] 00:11:11.900 }' 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.900 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.469 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.470 [2024-11-10 15:20:18.545396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.470 BaseBdev2 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev 
BaseBdev2 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.470 [ 00:11:12.470 { 00:11:12.470 "name": "BaseBdev2", 00:11:12.470 "aliases": [ 00:11:12.470 "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07" 00:11:12.470 ], 00:11:12.470 "product_name": "Malloc disk", 00:11:12.470 "block_size": 512, 00:11:12.470 "num_blocks": 65536, 00:11:12.470 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07", 00:11:12.470 "assigned_rate_limits": { 00:11:12.470 "rw_ios_per_sec": 0, 00:11:12.470 "rw_mbytes_per_sec": 0, 00:11:12.470 "r_mbytes_per_sec": 0, 00:11:12.470 "w_mbytes_per_sec": 0 00:11:12.470 }, 00:11:12.470 "claimed": true, 00:11:12.470 "claim_type": 
"exclusive_write", 00:11:12.470 "zoned": false, 00:11:12.470 "supported_io_types": { 00:11:12.470 "read": true, 00:11:12.470 "write": true, 00:11:12.470 "unmap": true, 00:11:12.470 "flush": true, 00:11:12.470 "reset": true, 00:11:12.470 "nvme_admin": false, 00:11:12.470 "nvme_io": false, 00:11:12.470 "nvme_io_md": false, 00:11:12.470 "write_zeroes": true, 00:11:12.470 "zcopy": true, 00:11:12.470 "get_zone_info": false, 00:11:12.470 "zone_management": false, 00:11:12.470 "zone_append": false, 00:11:12.470 "compare": false, 00:11:12.470 "compare_and_write": false, 00:11:12.470 "abort": true, 00:11:12.470 "seek_hole": false, 00:11:12.470 "seek_data": false, 00:11:12.470 "copy": true, 00:11:12.470 "nvme_iov_md": false 00:11:12.470 }, 00:11:12.470 "memory_domains": [ 00:11:12.470 { 00:11:12.470 "dma_device_id": "system", 00:11:12.470 "dma_device_type": 1 00:11:12.470 }, 00:11:12.470 { 00:11:12.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.470 "dma_device_type": 2 00:11:12.470 } 00:11:12.470 ], 00:11:12.470 "driver_specific": {} 00:11:12.470 } 00:11:12.470 ] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.470 "name": "Existed_Raid", 00:11:12.470 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:12.470 "strip_size_kb": 64, 00:11:12.470 "state": "configuring", 00:11:12.470 "raid_level": "concat", 00:11:12.470 "superblock": true, 00:11:12.470 "num_base_bdevs": 4, 00:11:12.470 "num_base_bdevs_discovered": 2, 00:11:12.470 "num_base_bdevs_operational": 4, 00:11:12.470 "base_bdevs_list": [ 00:11:12.470 { 00:11:12.470 "name": "BaseBdev1", 00:11:12.470 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:12.470 "is_configured": true, 00:11:12.470 "data_offset": 2048, 00:11:12.470 
"data_size": 63488 00:11:12.470 }, 00:11:12.470 { 00:11:12.470 "name": "BaseBdev2", 00:11:12.470 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07", 00:11:12.470 "is_configured": true, 00:11:12.470 "data_offset": 2048, 00:11:12.470 "data_size": 63488 00:11:12.470 }, 00:11:12.470 { 00:11:12.470 "name": "BaseBdev3", 00:11:12.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.470 "is_configured": false, 00:11:12.470 "data_offset": 0, 00:11:12.470 "data_size": 0 00:11:12.470 }, 00:11:12.470 { 00:11:12.470 "name": "BaseBdev4", 00:11:12.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.470 "is_configured": false, 00:11:12.470 "data_offset": 0, 00:11:12.470 "data_size": 0 00:11:12.470 } 00:11:12.470 ] 00:11:12.470 }' 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.470 15:20:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.730 [2024-11-10 15:20:19.038471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.730 BaseBdev3 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local i 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.730 [ 00:11:12.730 { 00:11:12.730 "name": "BaseBdev3", 00:11:12.730 "aliases": [ 00:11:12.730 "10c81163-f976-4d92-a2d2-e9419caed2e3" 00:11:12.730 ], 00:11:12.730 "product_name": "Malloc disk", 00:11:12.730 "block_size": 512, 00:11:12.730 "num_blocks": 65536, 00:11:12.730 "uuid": "10c81163-f976-4d92-a2d2-e9419caed2e3", 00:11:12.730 "assigned_rate_limits": { 00:11:12.730 "rw_ios_per_sec": 0, 00:11:12.730 "rw_mbytes_per_sec": 0, 00:11:12.730 "r_mbytes_per_sec": 0, 00:11:12.730 "w_mbytes_per_sec": 0 00:11:12.730 }, 00:11:12.730 "claimed": true, 00:11:12.730 "claim_type": "exclusive_write", 00:11:12.730 "zoned": false, 00:11:12.730 "supported_io_types": { 00:11:12.730 "read": true, 00:11:12.730 "write": true, 00:11:12.730 "unmap": true, 00:11:12.730 "flush": true, 00:11:12.730 "reset": true, 00:11:12.730 "nvme_admin": false, 00:11:12.730 "nvme_io": false, 00:11:12.730 "nvme_io_md": false, 
00:11:12.730 "write_zeroes": true, 00:11:12.730 "zcopy": true, 00:11:12.730 "get_zone_info": false, 00:11:12.730 "zone_management": false, 00:11:12.730 "zone_append": false, 00:11:12.730 "compare": false, 00:11:12.730 "compare_and_write": false, 00:11:12.730 "abort": true, 00:11:12.730 "seek_hole": false, 00:11:12.730 "seek_data": false, 00:11:12.730 "copy": true, 00:11:12.730 "nvme_iov_md": false 00:11:12.730 }, 00:11:12.730 "memory_domains": [ 00:11:12.730 { 00:11:12.730 "dma_device_id": "system", 00:11:12.730 "dma_device_type": 1 00:11:12.730 }, 00:11:12.730 { 00:11:12.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.730 "dma_device_type": 2 00:11:12.730 } 00:11:12.730 ], 00:11:12.730 "driver_specific": {} 00:11:12.730 } 00:11:12.730 ] 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.730 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.731 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.731 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.990 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.990 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.990 "name": "Existed_Raid", 00:11:12.990 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:12.990 "strip_size_kb": 64, 00:11:12.990 "state": "configuring", 00:11:12.990 "raid_level": "concat", 00:11:12.990 "superblock": true, 00:11:12.990 "num_base_bdevs": 4, 00:11:12.990 "num_base_bdevs_discovered": 3, 00:11:12.990 "num_base_bdevs_operational": 4, 00:11:12.990 "base_bdevs_list": [ 00:11:12.990 { 00:11:12.990 "name": "BaseBdev1", 00:11:12.990 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:12.990 "is_configured": true, 00:11:12.990 "data_offset": 2048, 00:11:12.990 "data_size": 63488 00:11:12.990 }, 00:11:12.990 { 00:11:12.990 "name": "BaseBdev2", 00:11:12.990 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07", 00:11:12.990 "is_configured": true, 00:11:12.990 "data_offset": 2048, 00:11:12.990 "data_size": 63488 00:11:12.990 }, 00:11:12.990 { 00:11:12.990 "name": "BaseBdev3", 00:11:12.990 "uuid": 
"10c81163-f976-4d92-a2d2-e9419caed2e3", 00:11:12.990 "is_configured": true, 00:11:12.990 "data_offset": 2048, 00:11:12.990 "data_size": 63488 00:11:12.990 }, 00:11:12.990 { 00:11:12.990 "name": "BaseBdev4", 00:11:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.990 "is_configured": false, 00:11:12.990 "data_offset": 0, 00:11:12.990 "data_size": 0 00:11:12.990 } 00:11:12.990 ] 00:11:12.990 }' 00:11:12.990 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.990 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.251 [2024-11-10 15:20:19.473699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.251 [2024-11-10 15:20:19.473900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:13.251 [2024-11-10 15:20:19.473921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:13.251 [2024-11-10 15:20:19.474212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:13.251 BaseBdev4 00:11:13.251 [2024-11-10 15:20:19.474421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:13.251 [2024-11-10 15:20:19.474473] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:13.251 [2024-11-10 15:20:19.474640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.251 [ 00:11:13.251 { 00:11:13.251 "name": "BaseBdev4", 00:11:13.251 "aliases": [ 00:11:13.251 "d9fb4569-8658-4d2d-8ef1-22061415d03b" 00:11:13.251 ], 00:11:13.251 "product_name": "Malloc disk", 00:11:13.251 "block_size": 512, 00:11:13.251 "num_blocks": 65536, 00:11:13.251 "uuid": "d9fb4569-8658-4d2d-8ef1-22061415d03b", 00:11:13.251 "assigned_rate_limits": { 00:11:13.251 "rw_ios_per_sec": 0, 00:11:13.251 "rw_mbytes_per_sec": 0, 00:11:13.251 "r_mbytes_per_sec": 0, 
00:11:13.251 "w_mbytes_per_sec": 0 00:11:13.251 }, 00:11:13.251 "claimed": true, 00:11:13.251 "claim_type": "exclusive_write", 00:11:13.251 "zoned": false, 00:11:13.251 "supported_io_types": { 00:11:13.251 "read": true, 00:11:13.251 "write": true, 00:11:13.251 "unmap": true, 00:11:13.251 "flush": true, 00:11:13.251 "reset": true, 00:11:13.251 "nvme_admin": false, 00:11:13.251 "nvme_io": false, 00:11:13.251 "nvme_io_md": false, 00:11:13.251 "write_zeroes": true, 00:11:13.251 "zcopy": true, 00:11:13.251 "get_zone_info": false, 00:11:13.251 "zone_management": false, 00:11:13.251 "zone_append": false, 00:11:13.251 "compare": false, 00:11:13.251 "compare_and_write": false, 00:11:13.251 "abort": true, 00:11:13.251 "seek_hole": false, 00:11:13.251 "seek_data": false, 00:11:13.251 "copy": true, 00:11:13.251 "nvme_iov_md": false 00:11:13.251 }, 00:11:13.251 "memory_domains": [ 00:11:13.251 { 00:11:13.251 "dma_device_id": "system", 00:11:13.251 "dma_device_type": 1 00:11:13.251 }, 00:11:13.251 { 00:11:13.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.251 "dma_device_type": 2 00:11:13.251 } 00:11:13.251 ], 00:11:13.251 "driver_specific": {} 00:11:13.251 } 00:11:13.251 ] 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:13.251 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.252 "name": "Existed_Raid", 00:11:13.252 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:13.252 "strip_size_kb": 64, 00:11:13.252 "state": "online", 00:11:13.252 "raid_level": "concat", 00:11:13.252 "superblock": true, 00:11:13.252 "num_base_bdevs": 4, 00:11:13.252 "num_base_bdevs_discovered": 4, 00:11:13.252 "num_base_bdevs_operational": 4, 00:11:13.252 "base_bdevs_list": [ 00:11:13.252 { 00:11:13.252 "name": "BaseBdev1", 00:11:13.252 "uuid": 
"1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:13.252 "is_configured": true, 00:11:13.252 "data_offset": 2048, 00:11:13.252 "data_size": 63488 00:11:13.252 }, 00:11:13.252 { 00:11:13.252 "name": "BaseBdev2", 00:11:13.252 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07", 00:11:13.252 "is_configured": true, 00:11:13.252 "data_offset": 2048, 00:11:13.252 "data_size": 63488 00:11:13.252 }, 00:11:13.252 { 00:11:13.252 "name": "BaseBdev3", 00:11:13.252 "uuid": "10c81163-f976-4d92-a2d2-e9419caed2e3", 00:11:13.252 "is_configured": true, 00:11:13.252 "data_offset": 2048, 00:11:13.252 "data_size": 63488 00:11:13.252 }, 00:11:13.252 { 00:11:13.252 "name": "BaseBdev4", 00:11:13.252 "uuid": "d9fb4569-8658-4d2d-8ef1-22061415d03b", 00:11:13.252 "is_configured": true, 00:11:13.252 "data_offset": 2048, 00:11:13.252 "data_size": 63488 00:11:13.252 } 00:11:13.252 ] 00:11:13.252 }' 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.252 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.819 15:20:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.819 [2024-11-10 15:20:19.994233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.819 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.819 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.819 "name": "Existed_Raid", 00:11:13.819 "aliases": [ 00:11:13.819 "b4b651b4-dbd2-475e-a57a-128fc9d4256c" 00:11:13.819 ], 00:11:13.819 "product_name": "Raid Volume", 00:11:13.819 "block_size": 512, 00:11:13.819 "num_blocks": 253952, 00:11:13.819 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:13.819 "assigned_rate_limits": { 00:11:13.819 "rw_ios_per_sec": 0, 00:11:13.819 "rw_mbytes_per_sec": 0, 00:11:13.819 "r_mbytes_per_sec": 0, 00:11:13.819 "w_mbytes_per_sec": 0 00:11:13.819 }, 00:11:13.819 "claimed": false, 00:11:13.819 "zoned": false, 00:11:13.819 "supported_io_types": { 00:11:13.819 "read": true, 00:11:13.819 "write": true, 00:11:13.819 "unmap": true, 00:11:13.819 "flush": true, 00:11:13.819 "reset": true, 00:11:13.819 "nvme_admin": false, 00:11:13.819 "nvme_io": false, 00:11:13.819 "nvme_io_md": false, 00:11:13.819 "write_zeroes": true, 00:11:13.819 "zcopy": false, 00:11:13.819 "get_zone_info": false, 00:11:13.819 "zone_management": false, 00:11:13.819 "zone_append": false, 00:11:13.819 "compare": false, 00:11:13.819 "compare_and_write": false, 00:11:13.819 "abort": false, 00:11:13.819 "seek_hole": false, 00:11:13.819 "seek_data": false, 00:11:13.819 "copy": false, 00:11:13.819 "nvme_iov_md": false 00:11:13.819 }, 00:11:13.819 "memory_domains": [ 00:11:13.819 { 00:11:13.819 "dma_device_id": "system", 00:11:13.819 "dma_device_type": 1 00:11:13.819 }, 00:11:13.819 { 00:11:13.819 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.819 "dma_device_type": 2 00:11:13.819 }, 00:11:13.819 { 00:11:13.819 "dma_device_id": "system", 00:11:13.819 "dma_device_type": 1 00:11:13.819 }, 00:11:13.819 { 00:11:13.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.819 "dma_device_type": 2 00:11:13.819 }, 00:11:13.819 { 00:11:13.819 "dma_device_id": "system", 00:11:13.819 "dma_device_type": 1 00:11:13.819 }, 00:11:13.819 { 00:11:13.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.819 "dma_device_type": 2 00:11:13.820 }, 00:11:13.820 { 00:11:13.820 "dma_device_id": "system", 00:11:13.820 "dma_device_type": 1 00:11:13.820 }, 00:11:13.820 { 00:11:13.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.820 "dma_device_type": 2 00:11:13.820 } 00:11:13.820 ], 00:11:13.820 "driver_specific": { 00:11:13.820 "raid": { 00:11:13.820 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c", 00:11:13.820 "strip_size_kb": 64, 00:11:13.820 "state": "online", 00:11:13.820 "raid_level": "concat", 00:11:13.820 "superblock": true, 00:11:13.820 "num_base_bdevs": 4, 00:11:13.820 "num_base_bdevs_discovered": 4, 00:11:13.820 "num_base_bdevs_operational": 4, 00:11:13.820 "base_bdevs_list": [ 00:11:13.820 { 00:11:13.820 "name": "BaseBdev1", 00:11:13.820 "uuid": "1946678f-13bf-4a75-8a89-498f593d5c88", 00:11:13.820 "is_configured": true, 00:11:13.820 "data_offset": 2048, 00:11:13.820 "data_size": 63488 00:11:13.820 }, 00:11:13.820 { 00:11:13.820 "name": "BaseBdev2", 00:11:13.820 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07", 00:11:13.820 "is_configured": true, 00:11:13.820 "data_offset": 2048, 00:11:13.820 "data_size": 63488 00:11:13.820 }, 00:11:13.820 { 00:11:13.820 "name": "BaseBdev3", 00:11:13.820 "uuid": "10c81163-f976-4d92-a2d2-e9419caed2e3", 00:11:13.820 "is_configured": true, 00:11:13.820 "data_offset": 2048, 00:11:13.820 "data_size": 63488 00:11:13.820 }, 00:11:13.820 { 00:11:13.820 "name": "BaseBdev4", 00:11:13.820 "uuid": "d9fb4569-8658-4d2d-8ef1-22061415d03b", 
00:11:13.820 "is_configured": true, 00:11:13.820 "data_offset": 2048, 00:11:13.820 "data_size": 63488 00:11:13.820 } 00:11:13.820 ] 00:11:13.820 } 00:11:13.820 } 00:11:13.820 }' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.820 BaseBdev2 00:11:13.820 BaseBdev3 00:11:13.820 BaseBdev4' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] 
| [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.820 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:14.079 15:20:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.079 [2024-11-10 15:20:20.310046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:14.079 [2024-11-10 15:20:20.310119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.079 [2024-11-10 15:20:20.310196] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@262 -- # expected_state=offline
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.079 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:14.079 "name": "Existed_Raid",
00:11:14.079 "uuid": "b4b651b4-dbd2-475e-a57a-128fc9d4256c",
00:11:14.079 "strip_size_kb": 64,
00:11:14.079 "state": "offline",
00:11:14.079 "raid_level": "concat",
00:11:14.079 "superblock": true,
00:11:14.079 "num_base_bdevs": 4,
00:11:14.079 "num_base_bdevs_discovered": 3,
00:11:14.079 "num_base_bdevs_operational": 3,
00:11:14.079 "base_bdevs_list": [
00:11:14.079 {
00:11:14.079 "name": null,
00:11:14.079 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:14.079 "is_configured": false,
00:11:14.079 "data_offset": 0,
00:11:14.079 "data_size": 63488
00:11:14.080 },
00:11:14.080 {
00:11:14.080 "name": "BaseBdev2",
00:11:14.080 "uuid": "61fcb5e4-3c9f-4895-9f1c-73fd86fe1a07",
00:11:14.080 "is_configured": true,
00:11:14.080 "data_offset": 2048,
00:11:14.080 "data_size": 63488
00:11:14.080 },
00:11:14.080 {
00:11:14.080 "name": "BaseBdev3",
00:11:14.080 "uuid": "10c81163-f976-4d92-a2d2-e9419caed2e3",
00:11:14.080 "is_configured": true,
00:11:14.080 "data_offset": 2048,
00:11:14.080 "data_size": 63488
00:11:14.080 },
00:11:14.080 {
00:11:14.080 "name": "BaseBdev4",
00:11:14.080 "uuid": "d9fb4569-8658-4d2d-8ef1-22061415d03b",
00:11:14.080 "is_configured": true,
00:11:14.080 "data_offset": 2048,
00:11:14.080 "data_size": 63488
00:11:14.080 }
00:11:14.080 ]
00:11:14.080 }'
00:11:14.080 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:14.080 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.648 [2024-11-10 15:20:20.765629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.648 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 [2024-11-10 15:20:20.832923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 [2024-11-10 15:20:20.888278] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:11:14.649 [2024-11-10 15:20:20.888385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 BaseBdev2
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.649 [
00:11:14.649 {
00:11:14.649 "name": "BaseBdev2",
00:11:14.649 "aliases": [
00:11:14.649 "c606eca3-8f32-41f5-a858-30473b73c7b3"
00:11:14.649 ],
00:11:14.649 "product_name": "Malloc disk",
00:11:14.649 "block_size": 512,
00:11:14.649 "num_blocks": 65536,
00:11:14.649 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3",
00:11:14.649 "assigned_rate_limits": {
00:11:14.649 "rw_ios_per_sec": 0,
00:11:14.649 "rw_mbytes_per_sec": 0,
00:11:14.649 "r_mbytes_per_sec": 0,
00:11:14.649 "w_mbytes_per_sec": 0
00:11:14.649 },
00:11:14.649 "claimed": false,
00:11:14.649 "zoned": false,
00:11:14.649 "supported_io_types": {
00:11:14.649 "read": true,
00:11:14.649 "write": true,
00:11:14.649 "unmap": true,
00:11:14.649 "flush": true,
00:11:14.649 "reset": true,
00:11:14.649 "nvme_admin": false,
00:11:14.649 "nvme_io": false,
00:11:14.649 "nvme_io_md": false,
00:11:14.649 "write_zeroes": true,
00:11:14.649 "zcopy": true,
00:11:14.649 "get_zone_info": false,
00:11:14.649 "zone_management": false,
00:11:14.649 "zone_append": false,
00:11:14.649 "compare": false,
00:11:14.649 "compare_and_write": false,
00:11:14.649 "abort": true,
00:11:14.649 "seek_hole": false,
00:11:14.649 "seek_data": false,
00:11:14.649 "copy": true,
00:11:14.649 "nvme_iov_md": false
00:11:14.649 },
00:11:14.649 "memory_domains": [
00:11:14.649 {
00:11:14.649 "dma_device_id": "system",
00:11:14.649 "dma_device_type": 1
00:11:14.649 },
00:11:14.649 {
00:11:14.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:14.649 "dma_device_type": 2
00:11:14.649 }
00:11:14.649 ],
00:11:14.649 "driver_specific": {}
00:11:14.649 }
00:11:14.649 ]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.649 15:20:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.909 BaseBdev3
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.909 [
00:11:14.909 {
00:11:14.909 "name": "BaseBdev3",
00:11:14.909 "aliases": [
00:11:14.909 "80042467-f1b8-4722-920c-6f8fd26d5a20"
00:11:14.909 ],
00:11:14.909 "product_name": "Malloc disk",
00:11:14.909 "block_size": 512,
00:11:14.909 "num_blocks": 65536,
00:11:14.909 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20",
00:11:14.909 "assigned_rate_limits": {
00:11:14.909 "rw_ios_per_sec": 0,
00:11:14.909 "rw_mbytes_per_sec": 0,
00:11:14.909 "r_mbytes_per_sec": 0,
00:11:14.909 "w_mbytes_per_sec": 0
00:11:14.909 },
00:11:14.909 "claimed": false,
00:11:14.909 "zoned": false,
00:11:14.909 "supported_io_types": {
00:11:14.909 "read": true,
00:11:14.909 "write": true,
00:11:14.909 "unmap": true,
00:11:14.909 "flush": true,
00:11:14.909 "reset": true,
00:11:14.909 "nvme_admin": false,
00:11:14.909 "nvme_io": false,
00:11:14.909 "nvme_io_md": false,
00:11:14.909 "write_zeroes": true,
00:11:14.909 "zcopy": true,
00:11:14.909 "get_zone_info": false,
00:11:14.909 "zone_management": false,
00:11:14.909 "zone_append": false,
00:11:14.909 "compare": false,
00:11:14.909 "compare_and_write": false,
00:11:14.909 "abort": true,
00:11:14.909 "seek_hole": false,
00:11:14.909 "seek_data": false,
00:11:14.909 "copy": true,
00:11:14.909 "nvme_iov_md": false
00:11:14.909 },
00:11:14.909 "memory_domains": [
00:11:14.909 {
00:11:14.909 "dma_device_id": "system",
00:11:14.909 "dma_device_type": 1
00:11:14.909 },
00:11:14.909 {
00:11:14.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:14.909 "dma_device_type": 2
00:11:14.909 }
00:11:14.909 ],
00:11:14.909 "driver_specific": {}
00:11:14.909 }
00:11:14.909 ]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.909 BaseBdev4
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:14.909 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.910 [
00:11:14.910 {
00:11:14.910 "name": "BaseBdev4",
00:11:14.910 "aliases": [
00:11:14.910 "60969c2b-fac1-4c74-9a78-ec4d983a7a89"
00:11:14.910 ],
00:11:14.910 "product_name": "Malloc disk",
00:11:14.910 "block_size": 512,
00:11:14.910 "num_blocks": 65536,
00:11:14.910 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89",
00:11:14.910 "assigned_rate_limits": {
00:11:14.910 "rw_ios_per_sec": 0,
00:11:14.910 "rw_mbytes_per_sec": 0,
00:11:14.910 "r_mbytes_per_sec": 0,
00:11:14.910 "w_mbytes_per_sec": 0
00:11:14.910 },
00:11:14.910 "claimed": false,
00:11:14.910 "zoned": false,
00:11:14.910 "supported_io_types": {
00:11:14.910 "read": true,
00:11:14.910 "write": true,
00:11:14.910 "unmap": true,
00:11:14.910 "flush": true,
00:11:14.910 "reset": true,
00:11:14.910 "nvme_admin": false,
00:11:14.910 "nvme_io": false,
00:11:14.910 "nvme_io_md": false,
00:11:14.910 "write_zeroes": true,
00:11:14.910 "zcopy": true,
00:11:14.910 "get_zone_info": false,
00:11:14.910 "zone_management": false,
00:11:14.910 "zone_append": false,
00:11:14.910 "compare": false,
00:11:14.910 "compare_and_write": false,
00:11:14.910 "abort": true,
00:11:14.910 "seek_hole": false,
00:11:14.910 "seek_data": false,
00:11:14.910 "copy": true,
00:11:14.910 "nvme_iov_md": false
00:11:14.910 },
00:11:14.910 "memory_domains": [
00:11:14.910 {
00:11:14.910 "dma_device_id": "system",
00:11:14.910 "dma_device_type": 1
00:11:14.910 },
00:11:14.910 {
00:11:14.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:14.910 "dma_device_type": 2
00:11:14.910 }
00:11:14.910 ],
00:11:14.910 "driver_specific": {}
00:11:14.910 }
00:11:14.910 ]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.910 [2024-11-10 15:20:21.105751] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:14.910 [2024-11-10 15:20:21.105842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:14.910 [2024-11-10 15:20:21.105897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:14.910 [2024-11-10 15:20:21.107737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:14.910 [2024-11-10 15:20:21.107826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:14.910 "name": "Existed_Raid",
00:11:14.910 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466",
00:11:14.910 "strip_size_kb": 64,
00:11:14.910 "state": "configuring",
00:11:14.910 "raid_level": "concat",
00:11:14.910 "superblock": true,
00:11:14.910 "num_base_bdevs": 4,
00:11:14.910 "num_base_bdevs_discovered": 3,
00:11:14.910 "num_base_bdevs_operational": 4,
00:11:14.910 "base_bdevs_list": [
00:11:14.910 {
00:11:14.910 "name": "BaseBdev1",
00:11:14.910 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:14.910 "is_configured": false,
00:11:14.910 "data_offset": 0,
00:11:14.910 "data_size": 0
00:11:14.910 },
00:11:14.910 {
00:11:14.910 "name": "BaseBdev2",
00:11:14.910 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3",
00:11:14.910 "is_configured": true,
00:11:14.910 "data_offset": 2048,
00:11:14.910 "data_size": 63488
00:11:14.910 },
00:11:14.910 {
00:11:14.910 "name": "BaseBdev3",
00:11:14.910 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20",
00:11:14.910 "is_configured": true,
00:11:14.910 "data_offset": 2048,
00:11:14.910 "data_size": 63488
00:11:14.910 },
00:11:14.910 {
00:11:14.910 "name": "BaseBdev4",
00:11:14.910 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89",
00:11:14.910 "is_configured": true,
00:11:14.910 "data_offset": 2048,
00:11:14.910 "data_size": 63488
00:11:14.910 }
00:11:14.910 ]
00:11:14.910 }'
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:14.910 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.170 [2024-11-10 15:20:21.489851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.170 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.440 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:15.440 "name": "Existed_Raid",
00:11:15.440 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466",
00:11:15.440 "strip_size_kb": 64,
00:11:15.440 "state": "configuring",
00:11:15.440 "raid_level": "concat",
00:11:15.440 "superblock": true,
00:11:15.440 "num_base_bdevs": 4,
00:11:15.440 "num_base_bdevs_discovered": 2,
00:11:15.440 "num_base_bdevs_operational": 4,
00:11:15.440 "base_bdevs_list": [
00:11:15.440 {
00:11:15.440 "name": "BaseBdev1",
00:11:15.440 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:15.440 "is_configured": false,
00:11:15.440 "data_offset": 0,
00:11:15.440 "data_size": 0
00:11:15.440 },
00:11:15.440 {
00:11:15.440 "name": null,
00:11:15.440 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3",
00:11:15.440 "is_configured": false,
00:11:15.440 "data_offset": 0,
00:11:15.440 "data_size": 63488
00:11:15.440 },
00:11:15.440 {
00:11:15.440 "name": "BaseBdev3",
00:11:15.440 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20",
00:11:15.440 "is_configured": true,
00:11:15.440 "data_offset": 2048,
00:11:15.440 "data_size": 63488
00:11:15.440 },
00:11:15.440 {
00:11:15.440 "name": "BaseBdev4",
00:11:15.440 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89",
00:11:15.440 "is_configured": true,
00:11:15.440 "data_offset": 2048,
00:11:15.440 "data_size": 63488
00:11:15.440 }
00:11:15.440 ]
00:11:15.440 }'
00:11:15.440 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:15.440 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.715 [2024-11-10 15:20:21.996891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:15.715 BaseBdev1
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout=
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]]
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.715 15:20:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.715 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.715 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:15.715 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.715 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.715 [
00:11:15.715 {
00:11:15.715 "name": "BaseBdev1",
00:11:15.715 "aliases": [
00:11:15.715 "b6d0b2d3-6779-4e45-a9e7-1281159a1a64"
00:11:15.715 ],
00:11:15.715 "product_name": "Malloc disk",
00:11:15.715 "block_size": 512,
00:11:15.715 "num_blocks": 65536,
00:11:15.715 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64",
00:11:15.716 "assigned_rate_limits": {
00:11:15.716 "rw_ios_per_sec": 0,
00:11:15.716 "rw_mbytes_per_sec": 0,
00:11:15.716 "r_mbytes_per_sec": 0,
00:11:15.716 "w_mbytes_per_sec": 0
00:11:15.716 },
00:11:15.716 "claimed": true,
00:11:15.716 "claim_type": "exclusive_write",
00:11:15.716 "zoned": false,
00:11:15.716 "supported_io_types": {
00:11:15.716 "read": true,
00:11:15.716 "write": true,
00:11:15.716 "unmap": true,
00:11:15.716 "flush": true,
00:11:15.716 "reset": true,
00:11:15.716 "nvme_admin": false,
00:11:15.716 "nvme_io": false,
00:11:15.716 "nvme_io_md": false,
00:11:15.716 "write_zeroes": true,
00:11:15.716 "zcopy": true,
00:11:15.716 "get_zone_info": false,
00:11:15.716 "zone_management": false,
00:11:15.716 "zone_append": false,
00:11:15.716 "compare": false,
00:11:15.716 "compare_and_write": false,
00:11:15.716 "abort": true,
00:11:15.716 "seek_hole": false,
00:11:15.716 "seek_data": false,
00:11:15.716 "copy": true,
00:11:15.716 "nvme_iov_md": false
00:11:15.716 },
00:11:15.716 "memory_domains": [
00:11:15.716 {
00:11:15.716 "dma_device_id": "system",
00:11:15.716 "dma_device_type": 1
00:11:15.716 },
00:11:15.716 {
00:11:15.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:15.716 "dma_device_type": 2
00:11:15.716 }
00:11:15.716 ],
00:11:15.716 "driver_specific": {}
00:11:15.716 }
00:11:15.716 ]
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:15.716 "name": "Existed_Raid",
00:11:15.716 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466",
00:11:15.716 "strip_size_kb": 64,
00:11:15.716 "state": "configuring",
00:11:15.716 "raid_level": "concat",
00:11:15.716 "superblock": true,
00:11:15.716 "num_base_bdevs": 4,
00:11:15.716 "num_base_bdevs_discovered": 3,
00:11:15.716 "num_base_bdevs_operational": 4,
00:11:15.716 "base_bdevs_list": [
00:11:15.716 {
00:11:15.716 "name": "BaseBdev1",
00:11:15.716 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64",
00:11:15.716 "is_configured": true,
00:11:15.716 "data_offset": 2048,
00:11:15.716 "data_size": 63488
00:11:15.716 },
00:11:15.716 {
00:11:15.716 "name": null,
00:11:15.716 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3",
00:11:15.716 "is_configured": false,
00:11:15.716 "data_offset": 0,
00:11:15.716 "data_size": 63488
00:11:15.716 },
00:11:15.716 {
00:11:15.716 "name": "BaseBdev3", 00:11:15.716 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:15.716 "is_configured": true, 00:11:15.716 "data_offset": 2048, 00:11:15.716 "data_size": 63488 00:11:15.716 }, 00:11:15.716 { 00:11:15.716 "name": "BaseBdev4", 00:11:15.716 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:15.716 "is_configured": true, 00:11:15.716 "data_offset": 2048, 00:11:15.716 "data_size": 63488 00:11:15.716 } 00:11:15.716 ] 00:11:15.716 }' 00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.716 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 [2024-11-10 15:20:22.505115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.286 15:20:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.286 "name": "Existed_Raid", 00:11:16.286 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:16.286 "strip_size_kb": 64, 
00:11:16.286 "state": "configuring", 00:11:16.286 "raid_level": "concat", 00:11:16.286 "superblock": true, 00:11:16.286 "num_base_bdevs": 4, 00:11:16.286 "num_base_bdevs_discovered": 2, 00:11:16.286 "num_base_bdevs_operational": 4, 00:11:16.286 "base_bdevs_list": [ 00:11:16.286 { 00:11:16.286 "name": "BaseBdev1", 00:11:16.286 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": null, 00:11:16.286 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:16.286 "is_configured": false, 00:11:16.286 "data_offset": 0, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": null, 00:11:16.286 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:16.286 "is_configured": false, 00:11:16.286 "data_offset": 0, 00:11:16.286 "data_size": 63488 00:11:16.286 }, 00:11:16.286 { 00:11:16.286 "name": "BaseBdev4", 00:11:16.286 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:16.286 "is_configured": true, 00:11:16.286 "data_offset": 2048, 00:11:16.286 "data_size": 63488 00:11:16.286 } 00:11:16.286 ] 00:11:16.286 }' 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.286 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.856 
15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.856 [2024-11-10 15:20:22.969306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.856 15:20:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.856 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.856 "name": "Existed_Raid", 00:11:16.856 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:16.856 "strip_size_kb": 64, 00:11:16.856 "state": "configuring", 00:11:16.856 "raid_level": "concat", 00:11:16.856 "superblock": true, 00:11:16.856 "num_base_bdevs": 4, 00:11:16.856 "num_base_bdevs_discovered": 3, 00:11:16.856 "num_base_bdevs_operational": 4, 00:11:16.856 "base_bdevs_list": [ 00:11:16.856 { 00:11:16.856 "name": "BaseBdev1", 00:11:16.856 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:16.856 "is_configured": true, 00:11:16.856 "data_offset": 2048, 00:11:16.856 "data_size": 63488 00:11:16.856 }, 00:11:16.856 { 00:11:16.856 "name": null, 00:11:16.856 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:16.856 "is_configured": false, 00:11:16.856 "data_offset": 0, 00:11:16.856 "data_size": 63488 00:11:16.856 }, 00:11:16.856 { 00:11:16.856 "name": "BaseBdev3", 00:11:16.856 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:16.856 "is_configured": true, 00:11:16.856 "data_offset": 2048, 00:11:16.856 "data_size": 63488 00:11:16.856 }, 00:11:16.856 { 00:11:16.856 "name": "BaseBdev4", 00:11:16.856 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:16.856 "is_configured": true, 00:11:16.856 "data_offset": 2048, 00:11:16.856 "data_size": 63488 00:11:16.856 } 00:11:16.856 ] 00:11:16.856 }' 00:11:16.856 15:20:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.856 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.116 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.375 [2024-11-10 15:20:23.481459] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.375 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.375 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.375 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.375 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 
-- # local strip_size=64 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.376 "name": "Existed_Raid", 00:11:17.376 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:17.376 "strip_size_kb": 64, 00:11:17.376 "state": "configuring", 00:11:17.376 "raid_level": "concat", 00:11:17.376 "superblock": true, 00:11:17.376 "num_base_bdevs": 4, 00:11:17.376 "num_base_bdevs_discovered": 2, 00:11:17.376 "num_base_bdevs_operational": 4, 00:11:17.376 "base_bdevs_list": [ 00:11:17.376 { 00:11:17.376 "name": null, 00:11:17.376 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:17.376 "is_configured": false, 00:11:17.376 "data_offset": 0, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": null, 00:11:17.376 "uuid": 
"c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:17.376 "is_configured": false, 00:11:17.376 "data_offset": 0, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": "BaseBdev3", 00:11:17.376 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 }, 00:11:17.376 { 00:11:17.376 "name": "BaseBdev4", 00:11:17.376 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:17.376 "is_configured": true, 00:11:17.376 "data_offset": 2048, 00:11:17.376 "data_size": 63488 00:11:17.376 } 00:11:17.376 ] 00:11:17.376 }' 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.376 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.636 [2024-11-10 15:20:23.948212] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.636 15:20:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.896 15:20:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.896 "name": "Existed_Raid", 00:11:17.896 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:17.896 "strip_size_kb": 64, 00:11:17.896 "state": "configuring", 00:11:17.896 "raid_level": "concat", 00:11:17.896 "superblock": true, 00:11:17.896 "num_base_bdevs": 4, 00:11:17.896 "num_base_bdevs_discovered": 3, 00:11:17.896 "num_base_bdevs_operational": 4, 00:11:17.896 "base_bdevs_list": [ 00:11:17.896 { 00:11:17.896 "name": null, 00:11:17.896 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:17.896 "is_configured": false, 00:11:17.896 "data_offset": 0, 00:11:17.896 "data_size": 63488 00:11:17.896 }, 00:11:17.896 { 00:11:17.896 "name": "BaseBdev2", 00:11:17.896 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:17.896 "is_configured": true, 00:11:17.896 "data_offset": 2048, 00:11:17.896 "data_size": 63488 00:11:17.896 }, 00:11:17.896 { 00:11:17.896 "name": "BaseBdev3", 00:11:17.896 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:17.896 "is_configured": true, 00:11:17.896 "data_offset": 2048, 00:11:17.896 "data_size": 63488 00:11:17.896 }, 00:11:17.896 { 00:11:17.896 "name": "BaseBdev4", 00:11:17.896 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:17.896 "is_configured": true, 00:11:17.896 "data_offset": 2048, 00:11:17.896 "data_size": 63488 00:11:17.896 } 00:11:17.896 ] 00:11:17.896 }' 00:11:17.896 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.896 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b6d0b2d3-6779-4e45-a9e7-1281159a1a64 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 [2024-11-10 15:20:24.451332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.155 NewBaseBdev 00:11:18.155 [2024-11-10 15:20:24.451583] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.155 [2024-11-10 15:20:24.451607] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.155 [2024-11-10 15:20:24.451856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:18.155 [2024-11-10 15:20:24.451969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.155 [2024-11-10 15:20:24.451979] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:18.155 [2024-11-10 15:20:24.452086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.155 [ 00:11:18.155 { 00:11:18.155 "name": "NewBaseBdev", 00:11:18.155 "aliases": [ 00:11:18.155 "b6d0b2d3-6779-4e45-a9e7-1281159a1a64" 
00:11:18.155 ], 00:11:18.155 "product_name": "Malloc disk", 00:11:18.155 "block_size": 512, 00:11:18.155 "num_blocks": 65536, 00:11:18.155 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:18.155 "assigned_rate_limits": { 00:11:18.155 "rw_ios_per_sec": 0, 00:11:18.155 "rw_mbytes_per_sec": 0, 00:11:18.155 "r_mbytes_per_sec": 0, 00:11:18.155 "w_mbytes_per_sec": 0 00:11:18.155 }, 00:11:18.155 "claimed": true, 00:11:18.155 "claim_type": "exclusive_write", 00:11:18.155 "zoned": false, 00:11:18.155 "supported_io_types": { 00:11:18.155 "read": true, 00:11:18.155 "write": true, 00:11:18.155 "unmap": true, 00:11:18.155 "flush": true, 00:11:18.155 "reset": true, 00:11:18.155 "nvme_admin": false, 00:11:18.155 "nvme_io": false, 00:11:18.155 "nvme_io_md": false, 00:11:18.155 "write_zeroes": true, 00:11:18.155 "zcopy": true, 00:11:18.155 "get_zone_info": false, 00:11:18.155 "zone_management": false, 00:11:18.155 "zone_append": false, 00:11:18.155 "compare": false, 00:11:18.155 "compare_and_write": false, 00:11:18.155 "abort": true, 00:11:18.155 "seek_hole": false, 00:11:18.155 "seek_data": false, 00:11:18.155 "copy": true, 00:11:18.155 "nvme_iov_md": false 00:11:18.155 }, 00:11:18.155 "memory_domains": [ 00:11:18.155 { 00:11:18.155 "dma_device_id": "system", 00:11:18.155 "dma_device_type": 1 00:11:18.155 }, 00:11:18.155 { 00:11:18.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.155 "dma_device_type": 2 00:11:18.155 } 00:11:18.155 ], 00:11:18.155 "driver_specific": {} 00:11:18.155 } 00:11:18.155 ] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.155 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.156 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.156 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.414 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.414 "name": "Existed_Raid", 00:11:18.414 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:18.414 "strip_size_kb": 64, 00:11:18.414 "state": "online", 00:11:18.414 "raid_level": "concat", 00:11:18.414 "superblock": true, 00:11:18.414 "num_base_bdevs": 4, 00:11:18.414 "num_base_bdevs_discovered": 4, 00:11:18.414 "num_base_bdevs_operational": 4, 
00:11:18.414 "base_bdevs_list": [ 00:11:18.414 { 00:11:18.414 "name": "NewBaseBdev", 00:11:18.414 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:18.414 "is_configured": true, 00:11:18.414 "data_offset": 2048, 00:11:18.414 "data_size": 63488 00:11:18.414 }, 00:11:18.414 { 00:11:18.414 "name": "BaseBdev2", 00:11:18.414 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:18.414 "is_configured": true, 00:11:18.414 "data_offset": 2048, 00:11:18.414 "data_size": 63488 00:11:18.414 }, 00:11:18.414 { 00:11:18.414 "name": "BaseBdev3", 00:11:18.414 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:18.414 "is_configured": true, 00:11:18.414 "data_offset": 2048, 00:11:18.414 "data_size": 63488 00:11:18.414 }, 00:11:18.414 { 00:11:18.414 "name": "BaseBdev4", 00:11:18.414 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:18.414 "is_configured": true, 00:11:18.414 "data_offset": 2048, 00:11:18.414 "data_size": 63488 00:11:18.414 } 00:11:18.414 ] 00:11:18.414 }' 00:11:18.414 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.414 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.674 [2024-11-10 15:20:24.887834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.674 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.674 "name": "Existed_Raid", 00:11:18.674 "aliases": [ 00:11:18.674 "260fc33a-c299-4486-8b6e-40b69c5b5466" 00:11:18.674 ], 00:11:18.674 "product_name": "Raid Volume", 00:11:18.674 "block_size": 512, 00:11:18.674 "num_blocks": 253952, 00:11:18.674 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:18.674 "assigned_rate_limits": { 00:11:18.674 "rw_ios_per_sec": 0, 00:11:18.674 "rw_mbytes_per_sec": 0, 00:11:18.674 "r_mbytes_per_sec": 0, 00:11:18.674 "w_mbytes_per_sec": 0 00:11:18.674 }, 00:11:18.674 "claimed": false, 00:11:18.674 "zoned": false, 00:11:18.674 "supported_io_types": { 00:11:18.674 "read": true, 00:11:18.674 "write": true, 00:11:18.674 "unmap": true, 00:11:18.674 "flush": true, 00:11:18.674 "reset": true, 00:11:18.674 "nvme_admin": false, 00:11:18.674 "nvme_io": false, 00:11:18.674 "nvme_io_md": false, 00:11:18.674 "write_zeroes": true, 00:11:18.674 "zcopy": false, 00:11:18.674 "get_zone_info": false, 00:11:18.674 "zone_management": false, 00:11:18.674 "zone_append": false, 00:11:18.674 "compare": false, 00:11:18.674 "compare_and_write": false, 00:11:18.674 "abort": false, 00:11:18.674 "seek_hole": false, 00:11:18.674 "seek_data": false, 00:11:18.674 "copy": false, 00:11:18.674 "nvme_iov_md": false 00:11:18.674 }, 00:11:18.674 "memory_domains": [ 00:11:18.674 { 
00:11:18.674 "dma_device_id": "system", 00:11:18.674 "dma_device_type": 1 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.674 "dma_device_type": 2 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "system", 00:11:18.674 "dma_device_type": 1 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.674 "dma_device_type": 2 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "system", 00:11:18.674 "dma_device_type": 1 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.674 "dma_device_type": 2 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "system", 00:11:18.674 "dma_device_type": 1 00:11:18.674 }, 00:11:18.674 { 00:11:18.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.674 "dma_device_type": 2 00:11:18.674 } 00:11:18.674 ], 00:11:18.674 "driver_specific": { 00:11:18.674 "raid": { 00:11:18.674 "uuid": "260fc33a-c299-4486-8b6e-40b69c5b5466", 00:11:18.674 "strip_size_kb": 64, 00:11:18.674 "state": "online", 00:11:18.674 "raid_level": "concat", 00:11:18.674 "superblock": true, 00:11:18.674 "num_base_bdevs": 4, 00:11:18.674 "num_base_bdevs_discovered": 4, 00:11:18.674 "num_base_bdevs_operational": 4, 00:11:18.674 "base_bdevs_list": [ 00:11:18.674 { 00:11:18.674 "name": "NewBaseBdev", 00:11:18.674 "uuid": "b6d0b2d3-6779-4e45-a9e7-1281159a1a64", 00:11:18.674 "is_configured": true, 00:11:18.674 "data_offset": 2048, 00:11:18.674 "data_size": 63488 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "name": "BaseBdev2", 00:11:18.675 "uuid": "c606eca3-8f32-41f5-a858-30473b73c7b3", 00:11:18.675 "is_configured": true, 00:11:18.675 "data_offset": 2048, 00:11:18.675 "data_size": 63488 00:11:18.675 }, 00:11:18.675 { 00:11:18.675 "name": "BaseBdev3", 00:11:18.675 "uuid": "80042467-f1b8-4722-920c-6f8fd26d5a20", 00:11:18.675 "is_configured": true, 00:11:18.675 "data_offset": 2048, 00:11:18.675 "data_size": 63488 00:11:18.675 }, 
00:11:18.675 { 00:11:18.675 "name": "BaseBdev4", 00:11:18.675 "uuid": "60969c2b-fac1-4c74-9a78-ec4d983a7a89", 00:11:18.675 "is_configured": true, 00:11:18.675 "data_offset": 2048, 00:11:18.675 "data_size": 63488 00:11:18.675 } 00:11:18.675 ] 00:11:18.675 } 00:11:18.675 } 00:11:18.675 }' 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:18.675 BaseBdev2 00:11:18.675 BaseBdev3 00:11:18.675 BaseBdev4' 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 15:20:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.675 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.934 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:18.935 15:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.935 [2024-11-10 15:20:25.167593] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:18.935 [2024-11-10 15:20:25.167665] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:18.935 [2024-11-10 15:20:25.167786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:18.935 [2024-11-10 15:20:25.167873] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:18.935 [2024-11-10 15:20:25.167923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.935 15:20:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84205 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 84205 ']' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 84205 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84205 00:11:18.935 killing process with pid 84205 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84205' 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 84205 00:11:18.935 [2024-11-10 15:20:25.204428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.935 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 84205 00:11:18.935 [2024-11-10 15:20:25.245747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:19.195 15:20:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:19.195 00:11:19.195 real 0m9.184s 00:11:19.195 user 0m15.678s 00:11:19.195 sys 0m1.841s 00:11:19.195 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:19.195 15:20:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.195 ************************************ 00:11:19.195 END 
TEST raid_state_function_test_sb 00:11:19.195 ************************************ 00:11:19.195 15:20:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:19.195 15:20:25 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:19.195 15:20:25 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:19.195 15:20:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:19.195 ************************************ 00:11:19.195 START TEST raid_superblock_test 00:11:19.195 ************************************ 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test concat 4 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 
00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84848 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84848 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 84848 ']' 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:19.195 15:20:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.455 [2024-11-10 15:20:25.618655] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:11:19.455 [2024-11-10 15:20:25.618884] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84848 ] 00:11:19.455 [2024-11-10 15:20:25.751292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:19.455 [2024-11-10 15:20:25.791489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.714 [2024-11-10 15:20:25.817431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.714 [2024-11-10 15:20:25.862582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.714 [2024-11-10 15:20:25.862611] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 malloc1 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 [2024-11-10 15:20:26.470443] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:20.284 [2024-11-10 15:20:26.470515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.284 [2024-11-10 15:20:26.470537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:20.284 [2024-11-10 15:20:26.470555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.284 [2024-11-10 15:20:26.472729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.284 [2024-11-10 15:20:26.472805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:20.284 pt1 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 malloc2 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 [2024-11-10 15:20:26.498973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.284 [2024-11-10 15:20:26.499095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.284 [2024-11-10 15:20:26.499118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:20.284 [2024-11-10 15:20:26.499127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.284 [2024-11-10 15:20:26.501165] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.284 [2024-11-10 15:20:26.501198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.284 pt2 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 malloc3 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:20.284 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.284 15:20:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.284 [2024-11-10 15:20:26.527556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:20.284 [2024-11-10 15:20:26.527645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.284 [2024-11-10 15:20:26.527681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:20.284 [2024-11-10 15:20:26.527709] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.284 [2024-11-10 15:20:26.529806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.285 [2024-11-10 15:20:26.529874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:20.285 pt3 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 malloc4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 [2024-11-10 15:20:26.574567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:20.285 [2024-11-10 15:20:26.574682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.285 [2024-11-10 15:20:26.574753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:20.285 [2024-11-10 15:20:26.574788] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.285 [2024-11-10 15:20:26.577021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.285 [2024-11-10 15:20:26.577095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:20.285 pt4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.285 15:20:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 [2024-11-10 15:20:26.586602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.285 [2024-11-10 15:20:26.588482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.285 [2024-11-10 15:20:26.588594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.285 [2024-11-10 15:20:26.588674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:20.285 [2024-11-10 15:20:26.588863] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:20.285 [2024-11-10 15:20:26.588907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:20.285 [2024-11-10 15:20:26.589190] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:20.285 [2024-11-10 15:20:26.589369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:20.285 [2024-11-10 15:20:26.589413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:20.285 [2024-11-10 15:20:26.589567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.285 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.545 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.545 "name": "raid_bdev1", 00:11:20.545 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:20.545 "strip_size_kb": 64, 00:11:20.545 "state": "online", 00:11:20.545 "raid_level": "concat", 00:11:20.545 "superblock": true, 00:11:20.545 "num_base_bdevs": 4, 00:11:20.545 "num_base_bdevs_discovered": 4, 00:11:20.545 "num_base_bdevs_operational": 4, 00:11:20.545 "base_bdevs_list": [ 00:11:20.545 { 00:11:20.545 "name": "pt1", 00:11:20.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 2048, 00:11:20.545 "data_size": 63488 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": "pt2", 00:11:20.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 2048, 00:11:20.545 
"data_size": 63488 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": "pt3", 00:11:20.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 2048, 00:11:20.545 "data_size": 63488 00:11:20.545 }, 00:11:20.545 { 00:11:20.545 "name": "pt4", 00:11:20.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.545 "is_configured": true, 00:11:20.545 "data_offset": 2048, 00:11:20.545 "data_size": 63488 00:11:20.545 } 00:11:20.545 ] 00:11:20.545 }' 00:11:20.545 15:20:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.545 15:20:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.805 [2024-11-10 15:20:27.035043] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:20.805 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:20.805 "name": "raid_bdev1", 00:11:20.805 "aliases": [ 00:11:20.805 "a7a27242-072e-413f-a86b-106ce92dc699" 00:11:20.805 ], 00:11:20.805 "product_name": "Raid Volume", 00:11:20.805 "block_size": 512, 00:11:20.805 "num_blocks": 253952, 00:11:20.805 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:20.805 "assigned_rate_limits": { 00:11:20.805 "rw_ios_per_sec": 0, 00:11:20.805 "rw_mbytes_per_sec": 0, 00:11:20.805 "r_mbytes_per_sec": 0, 00:11:20.805 "w_mbytes_per_sec": 0 00:11:20.805 }, 00:11:20.805 "claimed": false, 00:11:20.805 "zoned": false, 00:11:20.805 "supported_io_types": { 00:11:20.805 "read": true, 00:11:20.805 "write": true, 00:11:20.805 "unmap": true, 00:11:20.805 "flush": true, 00:11:20.805 "reset": true, 00:11:20.805 "nvme_admin": false, 00:11:20.805 "nvme_io": false, 00:11:20.805 "nvme_io_md": false, 00:11:20.805 "write_zeroes": true, 00:11:20.805 "zcopy": false, 00:11:20.805 "get_zone_info": false, 00:11:20.805 "zone_management": false, 00:11:20.805 "zone_append": false, 00:11:20.805 "compare": false, 00:11:20.805 "compare_and_write": false, 00:11:20.805 "abort": false, 00:11:20.805 "seek_hole": false, 00:11:20.805 "seek_data": false, 00:11:20.805 "copy": false, 00:11:20.805 "nvme_iov_md": false 00:11:20.805 }, 00:11:20.805 "memory_domains": [ 00:11:20.805 { 00:11:20.805 "dma_device_id": "system", 00:11:20.805 "dma_device_type": 1 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.805 "dma_device_type": 2 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "system", 00:11:20.805 "dma_device_type": 1 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.805 "dma_device_type": 2 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "system", 00:11:20.805 "dma_device_type": 1 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:20.805 "dma_device_type": 2 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "system", 00:11:20.805 "dma_device_type": 1 00:11:20.805 }, 00:11:20.805 { 00:11:20.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.805 "dma_device_type": 2 00:11:20.805 } 00:11:20.805 ], 00:11:20.805 "driver_specific": { 00:11:20.805 "raid": { 00:11:20.805 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:20.805 "strip_size_kb": 64, 00:11:20.805 "state": "online", 00:11:20.805 "raid_level": "concat", 00:11:20.805 "superblock": true, 00:11:20.805 "num_base_bdevs": 4, 00:11:20.805 "num_base_bdevs_discovered": 4, 00:11:20.805 "num_base_bdevs_operational": 4, 00:11:20.805 "base_bdevs_list": [ 00:11:20.805 { 00:11:20.805 "name": "pt1", 00:11:20.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:20.805 "is_configured": true, 00:11:20.806 "data_offset": 2048, 00:11:20.806 "data_size": 63488 00:11:20.806 }, 00:11:20.806 { 00:11:20.806 "name": "pt2", 00:11:20.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:20.806 "is_configured": true, 00:11:20.806 "data_offset": 2048, 00:11:20.806 "data_size": 63488 00:11:20.806 }, 00:11:20.806 { 00:11:20.806 "name": "pt3", 00:11:20.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:20.806 "is_configured": true, 00:11:20.806 "data_offset": 2048, 00:11:20.806 "data_size": 63488 00:11:20.806 }, 00:11:20.806 { 00:11:20.806 "name": "pt4", 00:11:20.806 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:20.806 "is_configured": true, 00:11:20.806 "data_offset": 2048, 00:11:20.806 "data_size": 63488 00:11:20.806 } 00:11:20.806 ] 00:11:20.806 } 00:11:20.806 } 00:11:20.806 }' 00:11:20.806 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:20.806 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:20.806 pt2 00:11:20.806 pt3 00:11:20.806 
pt4' 00:11:20.806 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.066 15:20:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 [2024-11-10 15:20:27.367077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a7a27242-072e-413f-a86b-106ce92dc699 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a7a27242-072e-413f-a86b-106ce92dc699 ']' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 [2024-11-10 15:20:27.398756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.066 [2024-11-10 15:20:27.398784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.066 [2024-11-10 15:20:27.398871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.066 [2024-11-10 15:20:27.398944] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.066 [2024-11-10 15:20:27.398957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.066 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:21.325 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 [2024-11-10 15:20:27.562849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:21.326 [2024-11-10 15:20:27.564787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:21.326 [2024-11-10 15:20:27.564876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:21.326 [2024-11-10 15:20:27.564927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:21.326 [2024-11-10 15:20:27.565014] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:21.326 [2024-11-10 15:20:27.565112] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:21.326 [2024-11-10 15:20:27.565169] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:21.326 [2024-11-10 15:20:27.565219] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:21.326 [2024-11-10 
15:20:27.565258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.326 [2024-11-10 15:20:27.565272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:11:21.326 request: 00:11:21.326 { 00:11:21.326 "name": "raid_bdev1", 00:11:21.326 "raid_level": "concat", 00:11:21.326 "base_bdevs": [ 00:11:21.326 "malloc1", 00:11:21.326 "malloc2", 00:11:21.326 "malloc3", 00:11:21.326 "malloc4" 00:11:21.326 ], 00:11:21.326 "strip_size_kb": 64, 00:11:21.326 "superblock": false, 00:11:21.326 "method": "bdev_raid_create", 00:11:21.326 "req_id": 1 00:11:21.326 } 00:11:21.326 Got JSON-RPC error response 00:11:21.326 response: 00:11:21.326 { 00:11:21.326 "code": -17, 00:11:21.326 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:21.326 } 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:21.326 15:20:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 [2024-11-10 15:20:27.626827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:21.326 [2024-11-10 15:20:27.626920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.326 [2024-11-10 15:20:27.626939] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:21.326 [2024-11-10 15:20:27.626950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.326 [2024-11-10 15:20:27.629158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.326 [2024-11-10 15:20:27.629193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:21.326 [2024-11-10 15:20:27.629274] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:21.326 [2024-11-10 15:20:27.629329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:21.326 pt1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=concat 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.326 "name": "raid_bdev1", 00:11:21.326 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:21.326 "strip_size_kb": 64, 00:11:21.326 "state": "configuring", 00:11:21.326 "raid_level": "concat", 00:11:21.326 "superblock": true, 00:11:21.326 "num_base_bdevs": 4, 00:11:21.326 "num_base_bdevs_discovered": 1, 00:11:21.326 "num_base_bdevs_operational": 4, 00:11:21.326 "base_bdevs_list": [ 00:11:21.326 { 00:11:21.326 "name": "pt1", 00:11:21.326 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.326 "is_configured": true, 00:11:21.326 "data_offset": 2048, 00:11:21.326 "data_size": 63488 00:11:21.326 }, 00:11:21.326 { 00:11:21.326 "name": null, 00:11:21.326 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:11:21.326 "is_configured": false, 00:11:21.326 "data_offset": 2048, 00:11:21.326 "data_size": 63488 00:11:21.326 }, 00:11:21.326 { 00:11:21.326 "name": null, 00:11:21.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.326 "is_configured": false, 00:11:21.326 "data_offset": 2048, 00:11:21.326 "data_size": 63488 00:11:21.326 }, 00:11:21.326 { 00:11:21.326 "name": null, 00:11:21.326 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.326 "is_configured": false, 00:11:21.326 "data_offset": 2048, 00:11:21.326 "data_size": 63488 00:11:21.326 } 00:11:21.326 ] 00:11:21.326 }' 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.326 15:20:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.895 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.896 [2024-11-10 15:20:28.031053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:21.896 [2024-11-10 15:20:28.031229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.896 [2024-11-10 15:20:28.031278] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:21.896 [2024-11-10 15:20:28.031311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.896 [2024-11-10 15:20:28.031836] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.896 [2024-11-10 15:20:28.031903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: pt2 00:11:21.896 [2024-11-10 15:20:28.032034] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:21.896 [2024-11-10 15:20:28.032096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:21.896 pt2 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.896 [2024-11-10 15:20:28.042980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.896 "name": "raid_bdev1", 00:11:21.896 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:21.896 "strip_size_kb": 64, 00:11:21.896 "state": "configuring", 00:11:21.896 "raid_level": "concat", 00:11:21.896 "superblock": true, 00:11:21.896 "num_base_bdevs": 4, 00:11:21.896 "num_base_bdevs_discovered": 1, 00:11:21.896 "num_base_bdevs_operational": 4, 00:11:21.896 "base_bdevs_list": [ 00:11:21.896 { 00:11:21.896 "name": "pt1", 00:11:21.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:21.896 "is_configured": true, 00:11:21.896 "data_offset": 2048, 00:11:21.896 "data_size": 63488 00:11:21.896 }, 00:11:21.896 { 00:11:21.896 "name": null, 00:11:21.896 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:21.896 "is_configured": false, 00:11:21.896 "data_offset": 0, 00:11:21.896 "data_size": 63488 00:11:21.896 }, 00:11:21.896 { 00:11:21.896 "name": null, 00:11:21.896 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:21.896 "is_configured": false, 00:11:21.896 "data_offset": 2048, 00:11:21.896 "data_size": 63488 00:11:21.896 }, 00:11:21.896 { 00:11:21.896 "name": null, 00:11:21.896 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:21.896 "is_configured": false, 00:11:21.896 "data_offset": 2048, 00:11:21.896 "data_size": 63488 00:11:21.896 } 00:11:21.896 ] 00:11:21.896 }' 
00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.896 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.156 [2024-11-10 15:20:28.415147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:22.156 [2024-11-10 15:20:28.415326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.156 [2024-11-10 15:20:28.415356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:22.156 [2024-11-10 15:20:28.415366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.156 [2024-11-10 15:20:28.415877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.156 [2024-11-10 15:20:28.415909] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:22.156 [2024-11-10 15:20:28.416027] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:22.156 [2024-11-10 15:20:28.416055] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:22.156 pt2 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.156 15:20:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.156 [2024-11-10 15:20:28.427100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:22.156 [2024-11-10 15:20:28.427154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.156 [2024-11-10 15:20:28.427174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:22.156 [2024-11-10 15:20:28.427183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.156 [2024-11-10 15:20:28.427597] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.156 [2024-11-10 15:20:28.427624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:22.156 [2024-11-10 15:20:28.427695] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:22.156 [2024-11-10 15:20:28.427715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:22.156 pt3 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.156 [2024-11-10 15:20:28.439128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:22.156 [2024-11-10 15:20:28.439196] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.156 [2024-11-10 15:20:28.439228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:22.156 [2024-11-10 15:20:28.439242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.156 [2024-11-10 15:20:28.439737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.156 [2024-11-10 15:20:28.439782] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:22.156 [2024-11-10 15:20:28.439882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:22.156 [2024-11-10 15:20:28.439911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:22.156 [2024-11-10 15:20:28.440093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:22.156 [2024-11-10 15:20:28.440117] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:22.156 [2024-11-10 15:20:28.440409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:22.156 [2024-11-10 15:20:28.440545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:22.156 [2024-11-10 15:20:28.440560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:22.156 [2024-11-10 15:20:28.440669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.156 pt4 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.156 "name": 
"raid_bdev1", 00:11:22.156 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:22.156 "strip_size_kb": 64, 00:11:22.156 "state": "online", 00:11:22.156 "raid_level": "concat", 00:11:22.156 "superblock": true, 00:11:22.156 "num_base_bdevs": 4, 00:11:22.156 "num_base_bdevs_discovered": 4, 00:11:22.156 "num_base_bdevs_operational": 4, 00:11:22.156 "base_bdevs_list": [ 00:11:22.156 { 00:11:22.156 "name": "pt1", 00:11:22.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.156 "is_configured": true, 00:11:22.156 "data_offset": 2048, 00:11:22.156 "data_size": 63488 00:11:22.156 }, 00:11:22.156 { 00:11:22.156 "name": "pt2", 00:11:22.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.156 "is_configured": true, 00:11:22.156 "data_offset": 2048, 00:11:22.156 "data_size": 63488 00:11:22.156 }, 00:11:22.156 { 00:11:22.156 "name": "pt3", 00:11:22.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.156 "is_configured": true, 00:11:22.156 "data_offset": 2048, 00:11:22.156 "data_size": 63488 00:11:22.156 }, 00:11:22.156 { 00:11:22.156 "name": "pt4", 00:11:22.156 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.156 "is_configured": true, 00:11:22.156 "data_offset": 2048, 00:11:22.156 "data_size": 63488 00:11:22.156 } 00:11:22.156 ] 00:11:22.156 }' 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.156 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@184 -- # local name 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.776 [2024-11-10 15:20:28.843623] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.776 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.776 "name": "raid_bdev1", 00:11:22.776 "aliases": [ 00:11:22.776 "a7a27242-072e-413f-a86b-106ce92dc699" 00:11:22.776 ], 00:11:22.776 "product_name": "Raid Volume", 00:11:22.776 "block_size": 512, 00:11:22.776 "num_blocks": 253952, 00:11:22.776 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:22.776 "assigned_rate_limits": { 00:11:22.776 "rw_ios_per_sec": 0, 00:11:22.776 "rw_mbytes_per_sec": 0, 00:11:22.776 "r_mbytes_per_sec": 0, 00:11:22.776 "w_mbytes_per_sec": 0 00:11:22.776 }, 00:11:22.776 "claimed": false, 00:11:22.776 "zoned": false, 00:11:22.776 "supported_io_types": { 00:11:22.776 "read": true, 00:11:22.776 "write": true, 00:11:22.776 "unmap": true, 00:11:22.776 "flush": true, 00:11:22.776 "reset": true, 00:11:22.776 "nvme_admin": false, 00:11:22.776 "nvme_io": false, 00:11:22.776 "nvme_io_md": false, 00:11:22.776 "write_zeroes": true, 00:11:22.776 "zcopy": false, 00:11:22.776 "get_zone_info": false, 00:11:22.776 "zone_management": false, 00:11:22.776 "zone_append": false, 00:11:22.776 "compare": false, 00:11:22.776 "compare_and_write": false, 00:11:22.776 "abort": 
false, 00:11:22.776 "seek_hole": false, 00:11:22.776 "seek_data": false, 00:11:22.776 "copy": false, 00:11:22.776 "nvme_iov_md": false 00:11:22.776 }, 00:11:22.776 "memory_domains": [ 00:11:22.776 { 00:11:22.776 "dma_device_id": "system", 00:11:22.776 "dma_device_type": 1 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.776 "dma_device_type": 2 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "system", 00:11:22.776 "dma_device_type": 1 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.776 "dma_device_type": 2 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "system", 00:11:22.776 "dma_device_type": 1 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.776 "dma_device_type": 2 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "system", 00:11:22.776 "dma_device_type": 1 00:11:22.776 }, 00:11:22.776 { 00:11:22.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.776 "dma_device_type": 2 00:11:22.776 } 00:11:22.776 ], 00:11:22.776 "driver_specific": { 00:11:22.776 "raid": { 00:11:22.776 "uuid": "a7a27242-072e-413f-a86b-106ce92dc699", 00:11:22.776 "strip_size_kb": 64, 00:11:22.776 "state": "online", 00:11:22.776 "raid_level": "concat", 00:11:22.776 "superblock": true, 00:11:22.776 "num_base_bdevs": 4, 00:11:22.776 "num_base_bdevs_discovered": 4, 00:11:22.776 "num_base_bdevs_operational": 4, 00:11:22.776 "base_bdevs_list": [ 00:11:22.776 { 00:11:22.776 "name": "pt1", 00:11:22.776 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:22.776 "is_configured": true, 00:11:22.776 "data_offset": 2048, 00:11:22.777 "data_size": 63488 00:11:22.777 }, 00:11:22.777 { 00:11:22.777 "name": "pt2", 00:11:22.777 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:22.777 "is_configured": true, 00:11:22.777 "data_offset": 2048, 00:11:22.777 "data_size": 63488 00:11:22.777 }, 00:11:22.777 { 00:11:22.777 "name": "pt3", 
00:11:22.777 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:22.777 "is_configured": true, 00:11:22.777 "data_offset": 2048, 00:11:22.777 "data_size": 63488 00:11:22.777 }, 00:11:22.777 { 00:11:22.777 "name": "pt4", 00:11:22.777 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:22.777 "is_configured": true, 00:11:22.777 "data_offset": 2048, 00:11:22.777 "data_size": 63488 00:11:22.777 } 00:11:22.777 ] 00:11:22.777 } 00:11:22.777 } 00:11:22.777 }' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:22.777 pt2 00:11:22.777 pt3 00:11:22.777 pt4' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.777 15:20:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.777 15:20:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.777 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.062 [2024-11-10 15:20:29.115650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a7a27242-072e-413f-a86b-106ce92dc699 '!=' a7a27242-072e-413f-a86b-106ce92dc699 ']' 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84848 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # 
'[' -z 84848 ']' 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 84848 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84848 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84848' 00:11:23.062 killing process with pid 84848 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 84848 00:11:23.062 [2024-11-10 15:20:29.186044] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.062 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 84848 00:11:23.062 [2024-11-10 15:20:29.186280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.062 [2024-11-10 15:20:29.186387] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.062 [2024-11-10 15:20:29.186452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:11:23.062 [2024-11-10 15:20:29.268435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.322 15:20:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:23.322 00:11:23.322 real 0m4.061s 00:11:23.322 user 0m6.234s 00:11:23.322 sys 0m0.835s 00:11:23.322 15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:23.322 
15:20:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.322 ************************************ 00:11:23.322 END TEST raid_superblock_test 00:11:23.322 ************************************ 00:11:23.322 15:20:29 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:23.322 15:20:29 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:23.322 15:20:29 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.322 15:20:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.322 ************************************ 00:11:23.322 START TEST raid_read_error_test 00:11:23.322 ************************************ 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 read 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:23.322 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FllM5Y7CgT 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85103 
00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85103 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 85103 ']' 00:11:23.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.582 15:20:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.582 [2024-11-10 15:20:29.770573] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:23.582 [2024-11-10 15:20:29.770718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85103 ] 00:11:23.582 [2024-11-10 15:20:29.903440] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:23.582 [2024-11-10 15:20:29.937341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.841 [2024-11-10 15:20:29.979900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.841 [2024-11-10 15:20:30.058763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.841 [2024-11-10 15:20:30.058811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 BaseBdev1_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 true 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:24.411 [2024-11-10 15:20:30.627603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:24.411 [2024-11-10 15:20:30.627683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.411 [2024-11-10 15:20:30.627708] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:24.411 [2024-11-10 15:20:30.627722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.411 [2024-11-10 15:20:30.630145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.411 [2024-11-10 15:20:30.630181] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:24.411 BaseBdev1 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 BaseBdev2_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 true 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 [2024-11-10 15:20:30.674559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:24.411 [2024-11-10 15:20:30.674619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.411 [2024-11-10 15:20:30.674635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:24.411 [2024-11-10 15:20:30.674645] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.411 [2024-11-10 15:20:30.677077] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.411 [2024-11-10 15:20:30.677129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:24.411 BaseBdev2 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 BaseBdev3_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 true 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 [2024-11-10 15:20:30.721453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:24.411 [2024-11-10 15:20:30.721522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.411 [2024-11-10 15:20:30.721539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:24.411 [2024-11-10 15:20:30.721551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.411 [2024-11-10 15:20:30.723946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.411 [2024-11-10 15:20:30.723990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:24.411 BaseBdev3 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 BaseBdev4_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.411 true 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.411 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.671 [2024-11-10 15:20:30.776925] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:24.671 [2024-11-10 15:20:30.776995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.671 [2024-11-10 15:20:30.777026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.671 [2024-11-10 15:20:30.777038] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.671 [2024-11-10 15:20:30.779390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.671 [2024-11-10 15:20:30.779512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:24.671 BaseBdev4 00:11:24.671 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.671 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:24.671 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:24.671 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.671 [2024-11-10 15:20:30.789024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:24.671 [2024-11-10 15:20:30.791284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:24.671 [2024-11-10 15:20:30.791400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:24.671 [2024-11-10 15:20:30.791494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.671 [2024-11-10 15:20:30.791783] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:24.671 [2024-11-10 15:20:30.791839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.672 [2024-11-10 15:20:30.792159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:24.672 [2024-11-10 15:20:30.792405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:24.672 [2024-11-10 15:20:30.792454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:24.672 [2024-11-10 15:20:30.792643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.672 15:20:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.672 "name": "raid_bdev1", 00:11:24.672 "uuid": "0016aa7e-7f3f-4b90-90be-24f4b781e317", 00:11:24.672 "strip_size_kb": 64, 00:11:24.672 "state": "online", 00:11:24.672 "raid_level": "concat", 00:11:24.672 "superblock": true, 00:11:24.672 "num_base_bdevs": 4, 00:11:24.672 "num_base_bdevs_discovered": 4, 00:11:24.672 "num_base_bdevs_operational": 4, 00:11:24.672 "base_bdevs_list": [ 00:11:24.672 { 00:11:24.672 "name": "BaseBdev1", 00:11:24.672 "uuid": "49f0d6d1-35e1-5ebf-bc92-580549c54288", 00:11:24.672 "is_configured": true, 00:11:24.672 "data_offset": 2048, 00:11:24.672 "data_size": 63488 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "name": "BaseBdev2", 00:11:24.672 "uuid": "8a7a1615-b9fe-5d1f-9ae1-10b26155f027", 
00:11:24.672 "is_configured": true, 00:11:24.672 "data_offset": 2048, 00:11:24.672 "data_size": 63488 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "name": "BaseBdev3", 00:11:24.672 "uuid": "820991f3-d667-55d6-b5e4-1c273bfff92c", 00:11:24.672 "is_configured": true, 00:11:24.672 "data_offset": 2048, 00:11:24.672 "data_size": 63488 00:11:24.672 }, 00:11:24.672 { 00:11:24.672 "name": "BaseBdev4", 00:11:24.672 "uuid": "c5dba533-e0d7-5baa-bc79-52a813d9a773", 00:11:24.672 "is_configured": true, 00:11:24.672 "data_offset": 2048, 00:11:24.672 "data_size": 63488 00:11:24.672 } 00:11:24.672 ] 00:11:24.672 }' 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.672 15:20:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.931 15:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:24.931 15:20:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:24.931 [2024-11-10 15:20:31.269682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:25.871 15:20:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.871 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.131 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.131 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.131 "name": "raid_bdev1", 00:11:26.131 "uuid": "0016aa7e-7f3f-4b90-90be-24f4b781e317", 00:11:26.131 "strip_size_kb": 64, 00:11:26.131 "state": "online", 00:11:26.131 "raid_level": "concat", 00:11:26.131 "superblock": true, 00:11:26.131 "num_base_bdevs": 4, 
00:11:26.131 "num_base_bdevs_discovered": 4, 00:11:26.131 "num_base_bdevs_operational": 4, 00:11:26.131 "base_bdevs_list": [ 00:11:26.131 { 00:11:26.131 "name": "BaseBdev1", 00:11:26.131 "uuid": "49f0d6d1-35e1-5ebf-bc92-580549c54288", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev2", 00:11:26.131 "uuid": "8a7a1615-b9fe-5d1f-9ae1-10b26155f027", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev3", 00:11:26.131 "uuid": "820991f3-d667-55d6-b5e4-1c273bfff92c", 00:11:26.131 "is_configured": true, 00:11:26.131 "data_offset": 2048, 00:11:26.131 "data_size": 63488 00:11:26.131 }, 00:11:26.131 { 00:11:26.131 "name": "BaseBdev4", 00:11:26.132 "uuid": "c5dba533-e0d7-5baa-bc79-52a813d9a773", 00:11:26.132 "is_configured": true, 00:11:26.132 "data_offset": 2048, 00:11:26.132 "data_size": 63488 00:11:26.132 } 00:11:26.132 ] 00:11:26.132 }' 00:11:26.132 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.132 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.392 [2024-11-10 15:20:32.681738] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.392 [2024-11-10 15:20:32.681872] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.392 [2024-11-10 15:20:32.684287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.392 [2024-11-10 15:20:32.684403] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.392 [2024-11-10 15:20:32.684473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.392 [2024-11-10 15:20:32.684530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:26.392 { 00:11:26.392 "results": [ 00:11:26.392 { 00:11:26.392 "job": "raid_bdev1", 00:11:26.392 "core_mask": "0x1", 00:11:26.392 "workload": "randrw", 00:11:26.392 "percentage": 50, 00:11:26.392 "status": "finished", 00:11:26.392 "queue_depth": 1, 00:11:26.392 "io_size": 131072, 00:11:26.392 "runtime": 1.40963, 00:11:26.392 "iops": 14334.967331853039, 00:11:26.392 "mibps": 1791.8709164816298, 00:11:26.392 "io_failed": 1, 00:11:26.392 "io_timeout": 0, 00:11:26.392 "avg_latency_us": 98.13639500195957, 00:11:26.392 "min_latency_us": 25.10241436415933, 00:11:26.392 "max_latency_us": 1356.646038525233 00:11:26.392 } 00:11:26.392 ], 00:11:26.392 "core_count": 1 00:11:26.392 } 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85103 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 85103 ']' 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 85103 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85103 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:26.392 killing process with pid 85103 00:11:26.392 15:20:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85103' 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 85103 00:11:26.392 [2024-11-10 15:20:32.729660] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.392 15:20:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 85103 00:11:26.652 [2024-11-10 15:20:32.798275] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FllM5Y7CgT 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:26.911 00:11:26.911 real 0m3.465s 00:11:26.911 user 0m4.232s 00:11:26.911 sys 0m0.598s 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.911 15:20:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.911 ************************************ 00:11:26.911 END TEST raid_read_error_test 00:11:26.911 ************************************ 00:11:26.911 15:20:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test 
raid_io_error_test concat 4 write 00:11:26.911 15:20:33 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:26.911 15:20:33 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.911 15:20:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.911 ************************************ 00:11:26.911 START TEST raid_write_error_test 00:11:26.911 ************************************ 00:11:26.911 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test concat 4 write 00:11:26.911 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 
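The `(( i++ ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdevN` trace lines above are the array-building idiom from bdev_raid.sh@793-795. A minimal standalone sketch of the same loop (names taken from the trace itself):

```shell
# Rebuild of the base_bdevs array that the trace constructs with
# the (( i++ )) / echo BaseBdevN pattern; num_base_bdevs=4 as in the test.
num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[*]}"   # BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4
```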
00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fL8EhHmNX0 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85238 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # 
waitforlisten 85238 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 85238 ']' 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:26.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:26.912 15:20:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.171 [2024-11-10 15:20:33.305233] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:27.171 [2024-11-10 15:20:33.305349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85238 ] 00:11:27.171 [2024-11-10 15:20:33.438292] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
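As a sanity check on the read-test "results" JSON earlier in this log: with `-o 128k` (io_size 131072 bytes), the reported MiB/s follows directly from IOPS. The figures below are copied from that results block; only the arithmetic is added here:

```shell
# Cross-check: mibps = iops * io_size / 2^20, using the read-test
# results values from the trace (io_size 131072 comes from -o 128k).
iops=14334.967331853039
io_size=131072
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.4f", i * s / (1024 * 1024) }')
echo "$mibps"   # 1791.8709, matching the logged "mibps" field
```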
00:11:27.171 [2024-11-10 15:20:33.469155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:27.171 [2024-11-10 15:20:33.508047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.429 [2024-11-10 15:20:33.584190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.429 [2024-11-10 15:20:33.584236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 BaseBdev1_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 true 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 [2024-11-10 15:20:34.167052] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:27.999 [2024-11-10 15:20:34.167236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.999 [2024-11-10 15:20:34.167271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:27.999 [2024-11-10 15:20:34.167288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.999 [2024-11-10 15:20:34.169800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.999 [2024-11-10 15:20:34.169840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:27.999 BaseBdev1 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 BaseBdev2_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 true 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
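Each BaseBdevN in this trace is assembled the same way: a malloc bdev, wrapped in an error bdev, wrapped in a passthru bdev that the RAID consumes. The sketch below just prints that per-bdev RPC sequence (echoed rather than sent through `rpc_cmd`, so it needs no live SPDK target):

```shell
# Per-base-bdev RPC chain seen in the trace:
# bdev_malloc_create -> bdev_error_create -> bdev_passthru_create.
# Printing the commands only; a real run would pipe these to rpc_cmd.
print_base_bdev_rpcs() {
    local name=$1
    echo "bdev_malloc_create 32 512 -b ${name}_malloc"
    echo "bdev_error_create ${name}_malloc"
    echo "bdev_passthru_create -b EE_${name}_malloc -p ${name}"
}
rpcs=$(for i in 1 2 3 4; do print_base_bdev_rpcs "BaseBdev$i"; done)
echo "$rpcs" | wc -l   # 12 commands for the 4 base bdevs
```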
00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 [2024-11-10 15:20:34.213969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:27.999 [2024-11-10 15:20:34.214047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.999 [2024-11-10 15:20:34.214067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:27.999 [2024-11-10 15:20:34.214078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.999 [2024-11-10 15:20:34.216466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.999 [2024-11-10 15:20:34.216505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:27.999 BaseBdev2 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 BaseBdev3_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 true 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.999 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.999 [2024-11-10 15:20:34.260688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:27.999 [2024-11-10 15:20:34.260820] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.999 [2024-11-10 15:20:34.260840] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:27.999 [2024-11-10 15:20:34.260852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.000 [2024-11-10 15:20:34.263142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.000 [2024-11-10 15:20:34.263178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:28.000 BaseBdev3 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 BaseBdev4_malloc 00:11:28.000 
15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 true 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 [2024-11-10 15:20:34.315442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:28.000 [2024-11-10 15:20:34.315580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.000 [2024-11-10 15:20:34.315603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.000 [2024-11-10 15:20:34.315614] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.000 [2024-11-10 15:20:34.317912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.000 [2024-11-10 15:20:34.317953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:28.000 BaseBdev4 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:28.000 15:20:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.000 [2024-11-10 15:20:34.327491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.000 [2024-11-10 15:20:34.329586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:28.000 [2024-11-10 15:20:34.329658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.000 [2024-11-10 15:20:34.329710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:28.000 [2024-11-10 15:20:34.329911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.000 [2024-11-10 15:20:34.329925] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:28.000 [2024-11-10 15:20:34.330181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:28.000 [2024-11-10 15:20:34.330324] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.000 [2024-11-10 15:20:34.330334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:28.000 [2024-11-10 15:20:34.330463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.000 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.260 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.260 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.260 "name": "raid_bdev1", 00:11:28.260 "uuid": "1fefc595-f6d5-48f1-acff-4875bfbbf6b4", 00:11:28.260 "strip_size_kb": 64, 00:11:28.260 "state": "online", 00:11:28.260 "raid_level": "concat", 00:11:28.260 "superblock": true, 00:11:28.260 "num_base_bdevs": 4, 00:11:28.260 "num_base_bdevs_discovered": 4, 00:11:28.260 "num_base_bdevs_operational": 4, 00:11:28.260 "base_bdevs_list": [ 00:11:28.260 { 00:11:28.260 "name": "BaseBdev1", 00:11:28.260 "uuid": "e6f6dde0-b65e-5f8f-aec2-0c20960b0038", 00:11:28.260 "is_configured": true, 00:11:28.260 "data_offset": 2048, 00:11:28.260 "data_size": 63488 00:11:28.260 }, 00:11:28.260 { 00:11:28.260 
"name": "BaseBdev2", 00:11:28.260 "uuid": "75e3c5c2-789c-5782-8c99-31c7b6b8c603", 00:11:28.260 "is_configured": true, 00:11:28.260 "data_offset": 2048, 00:11:28.260 "data_size": 63488 00:11:28.260 }, 00:11:28.260 { 00:11:28.260 "name": "BaseBdev3", 00:11:28.260 "uuid": "0cf8d0c7-8858-5d64-b91f-607ee0e8146a", 00:11:28.260 "is_configured": true, 00:11:28.260 "data_offset": 2048, 00:11:28.260 "data_size": 63488 00:11:28.260 }, 00:11:28.260 { 00:11:28.260 "name": "BaseBdev4", 00:11:28.260 "uuid": "5cb7188d-67ab-5d06-bcb6-dcaefada00e7", 00:11:28.260 "is_configured": true, 00:11:28.260 "data_offset": 2048, 00:11:28.260 "data_size": 63488 00:11:28.260 } 00:11:28.260 ] 00:11:28.260 }' 00:11:28.260 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.260 15:20:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.520 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:28.520 15:20:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:28.520 [2024-11-10 15:20:34.876257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.459 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.719 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.719 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.719 "name": "raid_bdev1", 00:11:29.719 "uuid": "1fefc595-f6d5-48f1-acff-4875bfbbf6b4", 00:11:29.719 "strip_size_kb": 64, 00:11:29.719 "state": "online", 
00:11:29.719 "raid_level": "concat", 00:11:29.719 "superblock": true, 00:11:29.719 "num_base_bdevs": 4, 00:11:29.719 "num_base_bdevs_discovered": 4, 00:11:29.719 "num_base_bdevs_operational": 4, 00:11:29.719 "base_bdevs_list": [ 00:11:29.719 { 00:11:29.719 "name": "BaseBdev1", 00:11:29.719 "uuid": "e6f6dde0-b65e-5f8f-aec2-0c20960b0038", 00:11:29.719 "is_configured": true, 00:11:29.719 "data_offset": 2048, 00:11:29.719 "data_size": 63488 00:11:29.719 }, 00:11:29.719 { 00:11:29.719 "name": "BaseBdev2", 00:11:29.719 "uuid": "75e3c5c2-789c-5782-8c99-31c7b6b8c603", 00:11:29.719 "is_configured": true, 00:11:29.719 "data_offset": 2048, 00:11:29.719 "data_size": 63488 00:11:29.719 }, 00:11:29.719 { 00:11:29.719 "name": "BaseBdev3", 00:11:29.719 "uuid": "0cf8d0c7-8858-5d64-b91f-607ee0e8146a", 00:11:29.719 "is_configured": true, 00:11:29.719 "data_offset": 2048, 00:11:29.719 "data_size": 63488 00:11:29.719 }, 00:11:29.719 { 00:11:29.719 "name": "BaseBdev4", 00:11:29.719 "uuid": "5cb7188d-67ab-5d06-bcb6-dcaefada00e7", 00:11:29.719 "is_configured": true, 00:11:29.719 "data_offset": 2048, 00:11:29.719 "data_size": 63488 00:11:29.719 } 00:11:29.719 ] 00:11:29.719 }' 00:11:29.719 15:20:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.719 15:20:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.980 [2024-11-10 15:20:36.248121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:29.980 [2024-11-10 15:20:36.248260] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.980 [2024-11-10 15:20:36.250815] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.980 [2024-11-10 15:20:36.250922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.980 [2024-11-10 15:20:36.250990] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.980 [2024-11-10 15:20:36.251065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:29.980 { 00:11:29.980 "results": [ 00:11:29.980 { 00:11:29.980 "job": "raid_bdev1", 00:11:29.980 "core_mask": "0x1", 00:11:29.980 "workload": "randrw", 00:11:29.980 "percentage": 50, 00:11:29.980 "status": "finished", 00:11:29.980 "queue_depth": 1, 00:11:29.980 "io_size": 131072, 00:11:29.980 "runtime": 1.369569, 00:11:29.980 "iops": 13782.438124694703, 00:11:29.980 "mibps": 1722.804765586838, 00:11:29.980 "io_failed": 1, 00:11:29.980 "io_timeout": 0, 00:11:29.980 "avg_latency_us": 102.12798630000344, 00:11:29.980 "min_latency_us": 25.77181208053691, 00:11:29.980 "max_latency_us": 1420.908219297481 00:11:29.980 } 00:11:29.980 ], 00:11:29.980 "core_count": 1 00:11:29.980 } 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85238 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 85238 ']' 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 85238 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85238 00:11:29.980 killing process with pid 85238 00:11:29.980 15:20:36 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85238' 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 85238 00:11:29.980 [2024-11-10 15:20:36.287881] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.980 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 85238 00:11:30.246 [2024-11-10 15:20:36.353536] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fL8EhHmNX0 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.515 ************************************ 00:11:30.515 END TEST raid_write_error_test 00:11:30.515 ************************************ 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:30.515 00:11:30.515 real 0m3.487s 00:11:30.515 user 0m4.279s 00:11:30.515 sys 0m0.601s 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.515 15:20:36 bdev_raid.raid_write_error_test -- 
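The `bdev_raid.sh@845` bookkeeping above derives `fail_per_s=0.73` by dropping fio's `Job` header lines, keeping the `raid_bdev1` summary line, and printing its sixth whitespace-separated field. A minimal sketch of that pipeline; the `sample` text is hypothetical stand-in data for the truncated `/raidtest/tmp.*` result file:

```shell
# Sketch of the fail_per_s extraction (bdev_raid.sh@845). The sample text is
# hypothetical stand-in data; the real input is fio's per-job summary file.
sample='Job raid_bdev1 (raidtest): header line
raid_bdev1 (core 0): 18832 ios, 0.73 fail/s'

# grep -v Job drops the header, grep raid_bdev1 keeps the summary line,
# awk prints the sixth field, which carries the failures-per-second figure.
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
echo "$fail_per_s"
```

For concat, which carries no redundancy, `has_redundancy` returns 1 and the test then asserts the figure is not `0.00`, i.e. that the injected write error was actually observed.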
common/autotest_common.sh@10 -- # set +x 00:11:30.515 15:20:36 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:30.515 15:20:36 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:30.515 15:20:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:30.515 15:20:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:30.515 15:20:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.515 ************************************ 00:11:30.515 START TEST raid_state_function_test 00:11:30.515 ************************************ 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 false 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.515 15:20:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.515 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=85370 00:11:30.516 15:20:36 
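The unrolled `(( i++ ))` / `echo BaseBdevN` trace above is bash's xtrace of the loop that names the four base bdevs and captures them into the `base_bdevs` array. A POSIX-sh sketch of the same enumeration (the suite itself uses a bash `for (( ... ))`-style counter and array capture):

```shell
# Enumerate BaseBdev1..BaseBdev4 as in bdev_raid.sh@209-211; the suite
# captures the echoed names into a bash array, while this sketch joins
# them into a single space-separated string instead.
num_base_bdevs=4
names=""
i=1
while [ "$i" -le "$num_base_bdevs" ]; do
  names="${names}${names:+ }BaseBdev$i"
  i=$((i + 1))
done
echo "$names"
```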
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85370' 00:11:30.516 Process raid pid: 85370 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 85370 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 85370 ']' 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:30.516 15:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.516 [2024-11-10 15:20:36.852868] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:30.516 [2024-11-10 15:20:36.853074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.775 [2024-11-10 15:20:36.987625] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:30.775 [2024-11-10 15:20:37.027386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.775 [2024-11-10 15:20:37.075638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.035 [2024-11-10 15:20:37.152239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.035 [2024-11-10 15:20:37.152278] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.605 [2024-11-10 15:20:37.676762] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.605 [2024-11-10 15:20:37.676834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.605 [2024-11-10 15:20:37.676848] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.605 [2024-11-10 15:20:37.676855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.605 [2024-11-10 15:20:37.676866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.605 [2024-11-10 15:20:37.676874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.605 [2024-11-10 15:20:37.676885] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.605 [2024-11-10 
15:20:37.676891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.605 "name": "Existed_Raid", 00:11:31.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.605 "strip_size_kb": 0, 00:11:31.605 "state": "configuring", 00:11:31.605 "raid_level": "raid1", 00:11:31.605 "superblock": false, 00:11:31.605 "num_base_bdevs": 4, 00:11:31.605 "num_base_bdevs_discovered": 0, 00:11:31.605 "num_base_bdevs_operational": 4, 00:11:31.605 "base_bdevs_list": [ 00:11:31.605 { 00:11:31.605 "name": "BaseBdev1", 00:11:31.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.605 "is_configured": false, 00:11:31.605 "data_offset": 0, 00:11:31.605 "data_size": 0 00:11:31.605 }, 00:11:31.605 { 00:11:31.605 "name": "BaseBdev2", 00:11:31.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.605 "is_configured": false, 00:11:31.605 "data_offset": 0, 00:11:31.605 "data_size": 0 00:11:31.605 }, 00:11:31.605 { 00:11:31.605 "name": "BaseBdev3", 00:11:31.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.605 "is_configured": false, 00:11:31.605 "data_offset": 0, 00:11:31.605 "data_size": 0 00:11:31.605 }, 00:11:31.605 { 00:11:31.605 "name": "BaseBdev4", 00:11:31.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.605 "is_configured": false, 00:11:31.605 "data_offset": 0, 00:11:31.605 "data_size": 0 00:11:31.605 } 00:11:31.605 ] 00:11:31.605 }' 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.605 15:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.865 [2024-11-10 15:20:38.096788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
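`verify_raid_bdev_state` (bdev_raid.sh@103-115) fetches the JSON above with `bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares fields such as `state` and `num_base_bdevs_discovered` against the expected values. A self-contained sketch of those comparisons; it swaps the suite's jq for `sed` so it runs without jq, and the canned `info` document is an abbreviated stand-in for the real RPC output:

```shell
# Field checks in the spirit of verify_raid_bdev_state; "info" is a canned,
# abbreviated stand-in for bdev_raid_get_bdevs output, and sed replaces jq.
info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 4
}'

state=$(printf '%s\n' "$info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

# A raid bdev created before any of its base bdevs exist must sit in
# "configuring" with zero discovered members, as the trace above shows.
[ "$state" = "configuring" ] && [ "$discovered" -eq 0 ] && echo "state verified"
```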
bdev: Existed_Raid 00:11:31.865 [2024-11-10 15:20:38.096846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.865 [2024-11-10 15:20:38.104802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:31.865 [2024-11-10 15:20:38.104850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:31.865 [2024-11-10 15:20:38.104864] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:31.865 [2024-11-10 15:20:38.104872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:31.865 [2024-11-10 15:20:38.104881] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:31.865 [2024-11-10 15:20:38.104889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:31.865 [2024-11-10 15:20:38.104897] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:31.865 [2024-11-10 15:20:38.104905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.865 15:20:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.865 [2024-11-10 15:20:38.131712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.865 BaseBdev1 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.865 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.866 [ 00:11:31.866 { 00:11:31.866 "name": "BaseBdev1", 00:11:31.866 "aliases": [ 
00:11:31.866 "a321d804-27b9-4427-b893-1908b369aa5d" 00:11:31.866 ], 00:11:31.866 "product_name": "Malloc disk", 00:11:31.866 "block_size": 512, 00:11:31.866 "num_blocks": 65536, 00:11:31.866 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:31.866 "assigned_rate_limits": { 00:11:31.866 "rw_ios_per_sec": 0, 00:11:31.866 "rw_mbytes_per_sec": 0, 00:11:31.866 "r_mbytes_per_sec": 0, 00:11:31.866 "w_mbytes_per_sec": 0 00:11:31.866 }, 00:11:31.866 "claimed": true, 00:11:31.866 "claim_type": "exclusive_write", 00:11:31.866 "zoned": false, 00:11:31.866 "supported_io_types": { 00:11:31.866 "read": true, 00:11:31.866 "write": true, 00:11:31.866 "unmap": true, 00:11:31.866 "flush": true, 00:11:31.866 "reset": true, 00:11:31.866 "nvme_admin": false, 00:11:31.866 "nvme_io": false, 00:11:31.866 "nvme_io_md": false, 00:11:31.866 "write_zeroes": true, 00:11:31.866 "zcopy": true, 00:11:31.866 "get_zone_info": false, 00:11:31.866 "zone_management": false, 00:11:31.866 "zone_append": false, 00:11:31.866 "compare": false, 00:11:31.866 "compare_and_write": false, 00:11:31.866 "abort": true, 00:11:31.866 "seek_hole": false, 00:11:31.866 "seek_data": false, 00:11:31.866 "copy": true, 00:11:31.866 "nvme_iov_md": false 00:11:31.866 }, 00:11:31.866 "memory_domains": [ 00:11:31.866 { 00:11:31.866 "dma_device_id": "system", 00:11:31.866 "dma_device_type": 1 00:11:31.866 }, 00:11:31.866 { 00:11:31.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.866 "dma_device_type": 2 00:11:31.866 } 00:11:31.866 ], 00:11:31.866 "driver_specific": {} 00:11:31.866 } 00:11:31.866 ] 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- 
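`waitforbdev` (common/autotest_common.sh@901-909) polls `rpc_cmd bdev_get_bdevs -b <name> -t <timeout>` until the bdev shows up, then returns 0 so the test can proceed. A sketch of that retry shape; `check_bdev` is a hypothetical stand-in probe for the real RPC call:

```shell
# Retry loop in the spirit of waitforbdev: probe until success or the
# attempt budget runs out. check_bdev is a hypothetical stand-in for
# "rpc_cmd bdev_get_bdevs -b $name -t 2000".
waitfor() {
  name=$1
  tries=$2
  i=0
  while [ "$i" -lt "$tries" ]; do
    if check_bdev "$name"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

check_bdev() { [ "$1" = "BaseBdev1" ]; }  # stand-in: "bdev exists" probe
waitfor BaseBdev1 3 && echo "BaseBdev1 ready"
```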
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.866 "name": "Existed_Raid", 00:11:31.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.866 "strip_size_kb": 0, 00:11:31.866 "state": "configuring", 00:11:31.866 "raid_level": "raid1", 00:11:31.866 "superblock": false, 00:11:31.866 "num_base_bdevs": 4, 00:11:31.866 "num_base_bdevs_discovered": 1, 00:11:31.866 "num_base_bdevs_operational": 4, 
00:11:31.866 "base_bdevs_list": [ 00:11:31.866 { 00:11:31.866 "name": "BaseBdev1", 00:11:31.866 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:31.866 "is_configured": true, 00:11:31.866 "data_offset": 0, 00:11:31.866 "data_size": 65536 00:11:31.866 }, 00:11:31.866 { 00:11:31.866 "name": "BaseBdev2", 00:11:31.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.866 "is_configured": false, 00:11:31.866 "data_offset": 0, 00:11:31.866 "data_size": 0 00:11:31.866 }, 00:11:31.866 { 00:11:31.866 "name": "BaseBdev3", 00:11:31.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.866 "is_configured": false, 00:11:31.866 "data_offset": 0, 00:11:31.866 "data_size": 0 00:11:31.866 }, 00:11:31.866 { 00:11:31.866 "name": "BaseBdev4", 00:11:31.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:31.866 "is_configured": false, 00:11:31.866 "data_offset": 0, 00:11:31.866 "data_size": 0 00:11:31.866 } 00:11:31.866 ] 00:11:31.866 }' 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.866 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.435 [2024-11-10 15:20:38.615941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:32.435 [2024-11-10 15:20:38.616041] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 
-b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.435 [2024-11-10 15:20:38.627939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.435 [2024-11-10 15:20:38.630229] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:32.435 [2024-11-10 15:20:38.630267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:32.435 [2024-11-10 15:20:38.630278] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:32.435 [2024-11-10 15:20:38.630285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:32.435 [2024-11-10 15:20:38.630292] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:32.435 [2024-11-10 15:20:38.630299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.435 "name": "Existed_Raid", 00:11:32.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.435 "strip_size_kb": 0, 00:11:32.435 "state": "configuring", 00:11:32.435 "raid_level": "raid1", 00:11:32.435 "superblock": false, 00:11:32.435 "num_base_bdevs": 4, 00:11:32.435 "num_base_bdevs_discovered": 1, 00:11:32.435 "num_base_bdevs_operational": 4, 00:11:32.435 "base_bdevs_list": [ 00:11:32.435 { 00:11:32.435 "name": "BaseBdev1", 00:11:32.435 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:32.435 "is_configured": true, 00:11:32.435 "data_offset": 0, 00:11:32.435 "data_size": 65536 00:11:32.435 }, 00:11:32.435 { 
00:11:32.435 "name": "BaseBdev2", 00:11:32.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.435 "is_configured": false, 00:11:32.435 "data_offset": 0, 00:11:32.435 "data_size": 0 00:11:32.435 }, 00:11:32.435 { 00:11:32.435 "name": "BaseBdev3", 00:11:32.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.435 "is_configured": false, 00:11:32.435 "data_offset": 0, 00:11:32.435 "data_size": 0 00:11:32.435 }, 00:11:32.435 { 00:11:32.435 "name": "BaseBdev4", 00:11:32.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.435 "is_configured": false, 00:11:32.435 "data_offset": 0, 00:11:32.435 "data_size": 0 00:11:32.435 } 00:11:32.435 ] 00:11:32.435 }' 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.435 15:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.695 [2024-11-10 15:20:39.044888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.695 BaseBdev2 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.695 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 [ 00:11:32.955 { 00:11:32.955 "name": "BaseBdev2", 00:11:32.955 "aliases": [ 00:11:32.955 "6ee9c68d-af4e-4126-b9d1-8598602a2b26" 00:11:32.955 ], 00:11:32.955 "product_name": "Malloc disk", 00:11:32.955 "block_size": 512, 00:11:32.955 "num_blocks": 65536, 00:11:32.955 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:32.955 "assigned_rate_limits": { 00:11:32.955 "rw_ios_per_sec": 0, 00:11:32.955 "rw_mbytes_per_sec": 0, 00:11:32.955 "r_mbytes_per_sec": 0, 00:11:32.955 "w_mbytes_per_sec": 0 00:11:32.955 }, 00:11:32.955 "claimed": true, 00:11:32.955 "claim_type": "exclusive_write", 00:11:32.955 "zoned": false, 00:11:32.955 "supported_io_types": { 00:11:32.955 "read": true, 00:11:32.955 "write": true, 00:11:32.955 "unmap": true, 00:11:32.955 "flush": true, 00:11:32.955 "reset": true, 00:11:32.955 "nvme_admin": false, 00:11:32.955 "nvme_io": false, 00:11:32.955 "nvme_io_md": false, 00:11:32.955 "write_zeroes": true, 00:11:32.955 "zcopy": true, 00:11:32.955 "get_zone_info": false, 00:11:32.955 "zone_management": false, 
00:11:32.955 "zone_append": false, 00:11:32.955 "compare": false, 00:11:32.955 "compare_and_write": false, 00:11:32.955 "abort": true, 00:11:32.955 "seek_hole": false, 00:11:32.955 "seek_data": false, 00:11:32.955 "copy": true, 00:11:32.955 "nvme_iov_md": false 00:11:32.955 }, 00:11:32.955 "memory_domains": [ 00:11:32.955 { 00:11:32.955 "dma_device_id": "system", 00:11:32.955 "dma_device_type": 1 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:32.955 "dma_device_type": 2 00:11:32.955 } 00:11:32.955 ], 00:11:32.955 "driver_specific": {} 00:11:32.955 } 00:11:32.955 ] 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.955 15:20:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.955 "name": "Existed_Raid", 00:11:32.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.955 "strip_size_kb": 0, 00:11:32.955 "state": "configuring", 00:11:32.955 "raid_level": "raid1", 00:11:32.955 "superblock": false, 00:11:32.955 "num_base_bdevs": 4, 00:11:32.955 "num_base_bdevs_discovered": 2, 00:11:32.955 "num_base_bdevs_operational": 4, 00:11:32.955 "base_bdevs_list": [ 00:11:32.955 { 00:11:32.955 "name": "BaseBdev1", 00:11:32.955 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:32.955 "is_configured": true, 00:11:32.955 "data_offset": 0, 00:11:32.955 "data_size": 65536 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "name": "BaseBdev2", 00:11:32.955 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:32.955 "is_configured": true, 00:11:32.955 "data_offset": 0, 00:11:32.955 "data_size": 65536 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "name": "BaseBdev3", 00:11:32.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.955 "is_configured": false, 00:11:32.955 "data_offset": 0, 00:11:32.955 "data_size": 0 00:11:32.955 }, 00:11:32.955 { 00:11:32.955 "name": "BaseBdev4", 
00:11:32.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.955 "is_configured": false, 00:11:32.955 "data_offset": 0, 00:11:32.955 "data_size": 0 00:11:32.955 } 00:11:32.955 ] 00:11:32.955 }' 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.955 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.215 [2024-11-10 15:20:39.510415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.215 BaseBdev3 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.215 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.215 [ 00:11:33.215 { 00:11:33.215 "name": "BaseBdev3", 00:11:33.215 "aliases": [ 00:11:33.215 "5eefb725-5726-441c-98fe-27e0e91417dd" 00:11:33.215 ], 00:11:33.215 "product_name": "Malloc disk", 00:11:33.215 "block_size": 512, 00:11:33.215 "num_blocks": 65536, 00:11:33.215 "uuid": "5eefb725-5726-441c-98fe-27e0e91417dd", 00:11:33.215 "assigned_rate_limits": { 00:11:33.216 "rw_ios_per_sec": 0, 00:11:33.216 "rw_mbytes_per_sec": 0, 00:11:33.216 "r_mbytes_per_sec": 0, 00:11:33.216 "w_mbytes_per_sec": 0 00:11:33.216 }, 00:11:33.216 "claimed": true, 00:11:33.216 "claim_type": "exclusive_write", 00:11:33.216 "zoned": false, 00:11:33.216 "supported_io_types": { 00:11:33.216 "read": true, 00:11:33.216 "write": true, 00:11:33.216 "unmap": true, 00:11:33.216 "flush": true, 00:11:33.216 "reset": true, 00:11:33.216 "nvme_admin": false, 00:11:33.216 "nvme_io": false, 00:11:33.216 "nvme_io_md": false, 00:11:33.216 "write_zeroes": true, 00:11:33.216 "zcopy": true, 00:11:33.216 "get_zone_info": false, 00:11:33.216 "zone_management": false, 00:11:33.216 "zone_append": false, 00:11:33.216 "compare": false, 00:11:33.216 "compare_and_write": false, 00:11:33.216 "abort": true, 00:11:33.216 "seek_hole": false, 00:11:33.216 "seek_data": false, 00:11:33.216 "copy": true, 00:11:33.216 "nvme_iov_md": false 00:11:33.216 }, 00:11:33.216 "memory_domains": [ 00:11:33.216 { 00:11:33.216 "dma_device_id": "system", 00:11:33.216 "dma_device_type": 1 00:11:33.216 }, 00:11:33.216 { 00:11:33.216 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:33.216 "dma_device_type": 2 00:11:33.216 } 00:11:33.216 ], 00:11:33.216 "driver_specific": {} 00:11:33.216 } 00:11:33.216 ] 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.216 
15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.216 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.477 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.477 "name": "Existed_Raid", 00:11:33.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.477 "strip_size_kb": 0, 00:11:33.477 "state": "configuring", 00:11:33.477 "raid_level": "raid1", 00:11:33.477 "superblock": false, 00:11:33.477 "num_base_bdevs": 4, 00:11:33.477 "num_base_bdevs_discovered": 3, 00:11:33.477 "num_base_bdevs_operational": 4, 00:11:33.477 "base_bdevs_list": [ 00:11:33.477 { 00:11:33.477 "name": "BaseBdev1", 00:11:33.477 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:33.477 "is_configured": true, 00:11:33.477 "data_offset": 0, 00:11:33.477 "data_size": 65536 00:11:33.477 }, 00:11:33.477 { 00:11:33.477 "name": "BaseBdev2", 00:11:33.477 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:33.477 "is_configured": true, 00:11:33.477 "data_offset": 0, 00:11:33.477 "data_size": 65536 00:11:33.477 }, 00:11:33.477 { 00:11:33.477 "name": "BaseBdev3", 00:11:33.477 "uuid": "5eefb725-5726-441c-98fe-27e0e91417dd", 00:11:33.477 "is_configured": true, 00:11:33.477 "data_offset": 0, 00:11:33.477 "data_size": 65536 00:11:33.477 }, 00:11:33.477 { 00:11:33.477 "name": "BaseBdev4", 00:11:33.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.477 "is_configured": false, 00:11:33.477 "data_offset": 0, 00:11:33.477 "data_size": 0 00:11:33.477 } 00:11:33.477 ] 00:11:33.477 }' 00:11:33.477 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.477 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 [2024-11-10 15:20:39.991581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.738 BaseBdev4 00:11:33.738 [2024-11-10 15:20:39.991721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:33.738 [2024-11-10 15:20:39.991750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:33.738 [2024-11-10 15:20:39.992090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:33.738 [2024-11-10 15:20:39.992256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:33.738 [2024-11-10 15:20:39.992268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:33.738 [2024-11-10 15:20:39.992522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:33.738 
15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.738 15:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 [ 00:11:33.738 { 00:11:33.738 "name": "BaseBdev4", 00:11:33.738 "aliases": [ 00:11:33.738 "7ef9d06a-9978-4301-b28c-f2fd83b9e4ab" 00:11:33.738 ], 00:11:33.738 "product_name": "Malloc disk", 00:11:33.738 "block_size": 512, 00:11:33.738 "num_blocks": 65536, 00:11:33.738 "uuid": "7ef9d06a-9978-4301-b28c-f2fd83b9e4ab", 00:11:33.738 "assigned_rate_limits": { 00:11:33.738 "rw_ios_per_sec": 0, 00:11:33.738 "rw_mbytes_per_sec": 0, 00:11:33.738 "r_mbytes_per_sec": 0, 00:11:33.738 "w_mbytes_per_sec": 0 00:11:33.738 }, 00:11:33.738 "claimed": true, 00:11:33.738 "claim_type": "exclusive_write", 00:11:33.738 "zoned": false, 00:11:33.738 "supported_io_types": { 00:11:33.738 "read": true, 00:11:33.738 "write": true, 00:11:33.738 "unmap": true, 00:11:33.738 "flush": true, 00:11:33.738 "reset": true, 00:11:33.738 "nvme_admin": false, 00:11:33.738 "nvme_io": false, 00:11:33.738 "nvme_io_md": false, 00:11:33.738 "write_zeroes": true, 00:11:33.738 "zcopy": true, 00:11:33.738 "get_zone_info": false, 00:11:33.738 "zone_management": false, 00:11:33.738 "zone_append": false, 00:11:33.738 "compare": false, 00:11:33.738 "compare_and_write": false, 00:11:33.738 "abort": true, 00:11:33.738 "seek_hole": false, 
00:11:33.738 "seek_data": false, 00:11:33.738 "copy": true, 00:11:33.738 "nvme_iov_md": false 00:11:33.738 }, 00:11:33.738 "memory_domains": [ 00:11:33.738 { 00:11:33.738 "dma_device_id": "system", 00:11:33.738 "dma_device_type": 1 00:11:33.738 }, 00:11:33.738 { 00:11:33.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:33.738 "dma_device_type": 2 00:11:33.738 } 00:11:33.738 ], 00:11:33.738 "driver_specific": {} 00:11:33.738 } 00:11:33.738 ] 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.738 "name": "Existed_Raid", 00:11:33.738 "uuid": "378aa8e6-81fc-4c27-96fd-44400d8fb5c4", 00:11:33.738 "strip_size_kb": 0, 00:11:33.738 "state": "online", 00:11:33.738 "raid_level": "raid1", 00:11:33.738 "superblock": false, 00:11:33.738 "num_base_bdevs": 4, 00:11:33.738 "num_base_bdevs_discovered": 4, 00:11:33.738 "num_base_bdevs_operational": 4, 00:11:33.738 "base_bdevs_list": [ 00:11:33.738 { 00:11:33.738 "name": "BaseBdev1", 00:11:33.738 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:33.738 "is_configured": true, 00:11:33.738 "data_offset": 0, 00:11:33.738 "data_size": 65536 00:11:33.738 }, 00:11:33.738 { 00:11:33.738 "name": "BaseBdev2", 00:11:33.738 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:33.738 "is_configured": true, 00:11:33.738 "data_offset": 0, 00:11:33.738 "data_size": 65536 00:11:33.738 }, 00:11:33.738 { 00:11:33.738 "name": "BaseBdev3", 00:11:33.738 "uuid": "5eefb725-5726-441c-98fe-27e0e91417dd", 00:11:33.738 "is_configured": true, 00:11:33.738 "data_offset": 0, 00:11:33.738 "data_size": 65536 00:11:33.738 }, 00:11:33.738 { 00:11:33.738 "name": "BaseBdev4", 00:11:33.738 "uuid": "7ef9d06a-9978-4301-b28c-f2fd83b9e4ab", 00:11:33.738 "is_configured": true, 00:11:33.738 "data_offset": 0, 00:11:33.738 "data_size": 65536 00:11:33.738 } 00:11:33.738 ] 
00:11:33.738 }' 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.738 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.308 [2024-11-10 15:20:40.432152] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.308 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.308 "name": "Existed_Raid", 00:11:34.308 "aliases": [ 00:11:34.308 "378aa8e6-81fc-4c27-96fd-44400d8fb5c4" 00:11:34.308 ], 00:11:34.308 "product_name": "Raid Volume", 00:11:34.308 "block_size": 512, 00:11:34.308 "num_blocks": 65536, 00:11:34.308 "uuid": "378aa8e6-81fc-4c27-96fd-44400d8fb5c4", 00:11:34.308 
"assigned_rate_limits": { 00:11:34.308 "rw_ios_per_sec": 0, 00:11:34.308 "rw_mbytes_per_sec": 0, 00:11:34.308 "r_mbytes_per_sec": 0, 00:11:34.308 "w_mbytes_per_sec": 0 00:11:34.308 }, 00:11:34.308 "claimed": false, 00:11:34.308 "zoned": false, 00:11:34.308 "supported_io_types": { 00:11:34.308 "read": true, 00:11:34.308 "write": true, 00:11:34.308 "unmap": false, 00:11:34.309 "flush": false, 00:11:34.309 "reset": true, 00:11:34.309 "nvme_admin": false, 00:11:34.309 "nvme_io": false, 00:11:34.309 "nvme_io_md": false, 00:11:34.309 "write_zeroes": true, 00:11:34.309 "zcopy": false, 00:11:34.309 "get_zone_info": false, 00:11:34.309 "zone_management": false, 00:11:34.309 "zone_append": false, 00:11:34.309 "compare": false, 00:11:34.309 "compare_and_write": false, 00:11:34.309 "abort": false, 00:11:34.309 "seek_hole": false, 00:11:34.309 "seek_data": false, 00:11:34.309 "copy": false, 00:11:34.309 "nvme_iov_md": false 00:11:34.309 }, 00:11:34.309 "memory_domains": [ 00:11:34.309 { 00:11:34.309 "dma_device_id": "system", 00:11:34.309 "dma_device_type": 1 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.309 "dma_device_type": 2 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "system", 00:11:34.309 "dma_device_type": 1 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.309 "dma_device_type": 2 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "system", 00:11:34.309 "dma_device_type": 1 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.309 "dma_device_type": 2 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "system", 00:11:34.309 "dma_device_type": 1 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.309 "dma_device_type": 2 00:11:34.309 } 00:11:34.309 ], 00:11:34.309 "driver_specific": { 00:11:34.309 "raid": { 00:11:34.309 "uuid": 
"378aa8e6-81fc-4c27-96fd-44400d8fb5c4", 00:11:34.309 "strip_size_kb": 0, 00:11:34.309 "state": "online", 00:11:34.309 "raid_level": "raid1", 00:11:34.309 "superblock": false, 00:11:34.309 "num_base_bdevs": 4, 00:11:34.309 "num_base_bdevs_discovered": 4, 00:11:34.309 "num_base_bdevs_operational": 4, 00:11:34.309 "base_bdevs_list": [ 00:11:34.309 { 00:11:34.309 "name": "BaseBdev1", 00:11:34.309 "uuid": "a321d804-27b9-4427-b893-1908b369aa5d", 00:11:34.309 "is_configured": true, 00:11:34.309 "data_offset": 0, 00:11:34.309 "data_size": 65536 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "name": "BaseBdev2", 00:11:34.309 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:34.309 "is_configured": true, 00:11:34.309 "data_offset": 0, 00:11:34.309 "data_size": 65536 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "name": "BaseBdev3", 00:11:34.309 "uuid": "5eefb725-5726-441c-98fe-27e0e91417dd", 00:11:34.309 "is_configured": true, 00:11:34.309 "data_offset": 0, 00:11:34.309 "data_size": 65536 00:11:34.309 }, 00:11:34.309 { 00:11:34.309 "name": "BaseBdev4", 00:11:34.309 "uuid": "7ef9d06a-9978-4301-b28c-f2fd83b9e4ab", 00:11:34.309 "is_configured": true, 00:11:34.309 "data_offset": 0, 00:11:34.309 "data_size": 65536 00:11:34.309 } 00:11:34.309 ] 00:11:34.309 } 00:11:34.309 } 00:11:34.309 }' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:34.309 BaseBdev2 00:11:34.309 BaseBdev3 00:11:34.309 BaseBdev4' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.309 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.570 [2024-11-10 15:20:40.751968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.570 "name": "Existed_Raid", 00:11:34.570 "uuid": "378aa8e6-81fc-4c27-96fd-44400d8fb5c4", 00:11:34.570 "strip_size_kb": 0, 00:11:34.570 "state": "online", 00:11:34.570 "raid_level": "raid1", 00:11:34.570 "superblock": false, 00:11:34.570 "num_base_bdevs": 4, 00:11:34.570 "num_base_bdevs_discovered": 3, 00:11:34.570 "num_base_bdevs_operational": 3, 00:11:34.570 "base_bdevs_list": [ 00:11:34.570 { 00:11:34.570 "name": null, 00:11:34.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.570 "is_configured": false, 00:11:34.570 "data_offset": 0, 00:11:34.570 "data_size": 65536 00:11:34.570 }, 00:11:34.570 { 00:11:34.570 "name": "BaseBdev2", 00:11:34.570 "uuid": "6ee9c68d-af4e-4126-b9d1-8598602a2b26", 00:11:34.570 "is_configured": true, 00:11:34.570 "data_offset": 0, 00:11:34.570 "data_size": 65536 00:11:34.570 }, 00:11:34.570 { 00:11:34.570 "name": "BaseBdev3", 00:11:34.570 "uuid": "5eefb725-5726-441c-98fe-27e0e91417dd", 00:11:34.570 "is_configured": true, 00:11:34.570 "data_offset": 0, 00:11:34.570 "data_size": 65536 00:11:34.570 }, 00:11:34.570 { 00:11:34.570 "name": "BaseBdev4", 00:11:34.570 "uuid": "7ef9d06a-9978-4301-b28c-f2fd83b9e4ab", 00:11:34.570 "is_configured": true, 00:11:34.570 "data_offset": 0, 00:11:34.570 "data_size": 65536 00:11:34.570 } 00:11:34.570 ] 00:11:34.570 }' 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.570 15:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.829 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:34.830 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 [2024-11-10 15:20:41.244881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 [2024-11-10 15:20:41.325474] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.089 [2024-11-10 15:20:41.406057] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:35.089 [2024-11-10 15:20:41.406177] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.089 [2024-11-10 15:20:41.426306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.089 [2024-11-10 15:20:41.426363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.089 [2024-11-10 15:20:41.426379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:35.089 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.349 BaseBdev2 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.349 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 [ 00:11:35.350 { 00:11:35.350 "name": "BaseBdev2", 00:11:35.350 "aliases": [ 00:11:35.350 "ae775147-8b49-47b7-afdd-45e955b56815" 00:11:35.350 ], 00:11:35.350 "product_name": "Malloc disk", 00:11:35.350 "block_size": 512, 00:11:35.350 "num_blocks": 65536, 00:11:35.350 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:35.350 "assigned_rate_limits": { 00:11:35.350 "rw_ios_per_sec": 0, 00:11:35.350 "rw_mbytes_per_sec": 0, 00:11:35.350 "r_mbytes_per_sec": 0, 00:11:35.350 "w_mbytes_per_sec": 0 00:11:35.350 }, 00:11:35.350 "claimed": false, 00:11:35.350 "zoned": false, 00:11:35.350 "supported_io_types": { 00:11:35.350 "read": true, 00:11:35.350 "write": true, 00:11:35.350 "unmap": true, 00:11:35.350 "flush": true, 00:11:35.350 "reset": true, 00:11:35.350 "nvme_admin": false, 00:11:35.350 "nvme_io": false, 00:11:35.350 "nvme_io_md": false, 00:11:35.350 "write_zeroes": true, 00:11:35.350 "zcopy": true, 00:11:35.350 "get_zone_info": false, 00:11:35.350 "zone_management": false, 00:11:35.350 "zone_append": false, 00:11:35.350 "compare": false, 00:11:35.350 "compare_and_write": false, 00:11:35.350 "abort": true, 00:11:35.350 "seek_hole": false, 00:11:35.350 "seek_data": false, 00:11:35.350 "copy": true, 00:11:35.350 "nvme_iov_md": false 00:11:35.350 }, 00:11:35.350 "memory_domains": [ 00:11:35.350 { 00:11:35.350 "dma_device_id": "system", 00:11:35.350 "dma_device_type": 1 00:11:35.350 }, 
00:11:35.350 { 00:11:35.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.350 "dma_device_type": 2 00:11:35.350 } 00:11:35.350 ], 00:11:35.350 "driver_specific": {} 00:11:35.350 } 00:11:35.350 ] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 BaseBdev3 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 [ 00:11:35.350 { 00:11:35.350 "name": "BaseBdev3", 00:11:35.350 "aliases": [ 00:11:35.350 "ef4f10a7-23f1-472d-867f-4b8c646bf234" 00:11:35.350 ], 00:11:35.350 "product_name": "Malloc disk", 00:11:35.350 "block_size": 512, 00:11:35.350 "num_blocks": 65536, 00:11:35.350 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:35.350 "assigned_rate_limits": { 00:11:35.350 "rw_ios_per_sec": 0, 00:11:35.350 "rw_mbytes_per_sec": 0, 00:11:35.350 "r_mbytes_per_sec": 0, 00:11:35.350 "w_mbytes_per_sec": 0 00:11:35.350 }, 00:11:35.350 "claimed": false, 00:11:35.350 "zoned": false, 00:11:35.350 "supported_io_types": { 00:11:35.350 "read": true, 00:11:35.350 "write": true, 00:11:35.350 "unmap": true, 00:11:35.350 "flush": true, 00:11:35.350 "reset": true, 00:11:35.350 "nvme_admin": false, 00:11:35.350 "nvme_io": false, 00:11:35.350 "nvme_io_md": false, 00:11:35.350 "write_zeroes": true, 00:11:35.350 "zcopy": true, 00:11:35.350 "get_zone_info": false, 00:11:35.350 "zone_management": false, 00:11:35.350 "zone_append": false, 00:11:35.350 "compare": false, 00:11:35.350 "compare_and_write": false, 00:11:35.350 "abort": true, 00:11:35.350 "seek_hole": false, 00:11:35.350 "seek_data": false, 00:11:35.350 "copy": true, 00:11:35.350 "nvme_iov_md": false 00:11:35.350 }, 00:11:35.350 "memory_domains": [ 00:11:35.350 { 00:11:35.350 "dma_device_id": "system", 00:11:35.350 "dma_device_type": 1 00:11:35.350 }, 00:11:35.350 { 
00:11:35.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.350 "dma_device_type": 2 00:11:35.350 } 00:11:35.350 ], 00:11:35.350 "driver_specific": {} 00:11:35.350 } 00:11:35.350 ] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 BaseBdev4 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 
15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.350 [ 00:11:35.350 { 00:11:35.350 "name": "BaseBdev4", 00:11:35.350 "aliases": [ 00:11:35.350 "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd" 00:11:35.350 ], 00:11:35.350 "product_name": "Malloc disk", 00:11:35.350 "block_size": 512, 00:11:35.350 "num_blocks": 65536, 00:11:35.350 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:35.350 "assigned_rate_limits": { 00:11:35.350 "rw_ios_per_sec": 0, 00:11:35.350 "rw_mbytes_per_sec": 0, 00:11:35.350 "r_mbytes_per_sec": 0, 00:11:35.350 "w_mbytes_per_sec": 0 00:11:35.350 }, 00:11:35.350 "claimed": false, 00:11:35.350 "zoned": false, 00:11:35.350 "supported_io_types": { 00:11:35.350 "read": true, 00:11:35.350 "write": true, 00:11:35.350 "unmap": true, 00:11:35.350 "flush": true, 00:11:35.350 "reset": true, 00:11:35.350 "nvme_admin": false, 00:11:35.350 "nvme_io": false, 00:11:35.350 "nvme_io_md": false, 00:11:35.350 "write_zeroes": true, 00:11:35.350 "zcopy": true, 00:11:35.350 "get_zone_info": false, 00:11:35.350 "zone_management": false, 00:11:35.350 "zone_append": false, 00:11:35.350 "compare": false, 00:11:35.350 "compare_and_write": false, 00:11:35.350 "abort": true, 00:11:35.350 "seek_hole": false, 00:11:35.350 "seek_data": false, 00:11:35.350 "copy": true, 00:11:35.350 "nvme_iov_md": false 00:11:35.350 }, 00:11:35.350 "memory_domains": [ 00:11:35.350 { 00:11:35.350 "dma_device_id": "system", 00:11:35.350 "dma_device_type": 1 00:11:35.350 }, 00:11:35.350 { 00:11:35.350 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.350 "dma_device_type": 2 00:11:35.350 } 00:11:35.350 ], 00:11:35.350 "driver_specific": {} 00:11:35.350 } 00:11:35.350 ] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.350 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.351 [2024-11-10 15:20:41.659448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.351 [2024-11-10 15:20:41.659607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.351 [2024-11-10 15:20:41.659649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.351 [2024-11-10 15:20:41.661878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.351 [2024-11-10 15:20:41.661934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.351 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.611 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.611 "name": "Existed_Raid", 00:11:35.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.611 "strip_size_kb": 0, 00:11:35.611 "state": "configuring", 00:11:35.611 "raid_level": "raid1", 00:11:35.611 "superblock": false, 00:11:35.611 "num_base_bdevs": 4, 00:11:35.611 "num_base_bdevs_discovered": 3, 00:11:35.611 "num_base_bdevs_operational": 4, 00:11:35.611 "base_bdevs_list": [ 
00:11:35.611 { 00:11:35.611 "name": "BaseBdev1", 00:11:35.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.611 "is_configured": false, 00:11:35.611 "data_offset": 0, 00:11:35.611 "data_size": 0 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "BaseBdev2", 00:11:35.611 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:35.611 "is_configured": true, 00:11:35.611 "data_offset": 0, 00:11:35.611 "data_size": 65536 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "BaseBdev3", 00:11:35.611 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:35.611 "is_configured": true, 00:11:35.611 "data_offset": 0, 00:11:35.611 "data_size": 65536 00:11:35.611 }, 00:11:35.611 { 00:11:35.611 "name": "BaseBdev4", 00:11:35.611 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:35.611 "is_configured": true, 00:11:35.611 "data_offset": 0, 00:11:35.611 "data_size": 65536 00:11:35.611 } 00:11:35.611 ] 00:11:35.611 }' 00:11:35.611 15:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.611 15:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.871 [2024-11-10 15:20:42.091521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:35.871 15:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.871 "name": "Existed_Raid", 00:11:35.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.871 "strip_size_kb": 0, 00:11:35.871 "state": "configuring", 00:11:35.871 "raid_level": "raid1", 00:11:35.871 "superblock": false, 00:11:35.871 "num_base_bdevs": 4, 00:11:35.871 "num_base_bdevs_discovered": 2, 00:11:35.871 "num_base_bdevs_operational": 4, 00:11:35.871 "base_bdevs_list": [ 00:11:35.871 { 00:11:35.871 "name": "BaseBdev1", 
00:11:35.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.871 "is_configured": false, 00:11:35.871 "data_offset": 0, 00:11:35.871 "data_size": 0 00:11:35.871 }, 00:11:35.871 { 00:11:35.871 "name": null, 00:11:35.871 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:35.871 "is_configured": false, 00:11:35.871 "data_offset": 0, 00:11:35.871 "data_size": 65536 00:11:35.871 }, 00:11:35.871 { 00:11:35.871 "name": "BaseBdev3", 00:11:35.871 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:35.871 "is_configured": true, 00:11:35.871 "data_offset": 0, 00:11:35.871 "data_size": 65536 00:11:35.871 }, 00:11:35.871 { 00:11:35.871 "name": "BaseBdev4", 00:11:35.871 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:35.871 "is_configured": true, 00:11:35.871 "data_offset": 0, 00:11:35.871 "data_size": 65536 00:11:35.871 } 00:11:35.871 ] 00:11:35.871 }' 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.871 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.440 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:36.440 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.440 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.441 [2024-11-10 15:20:42.572400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.441 BaseBdev1 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.441 [ 00:11:36.441 { 00:11:36.441 "name": "BaseBdev1", 00:11:36.441 "aliases": [ 00:11:36.441 "47c85c85-49d7-4515-b35f-9c7351a18a0f" 00:11:36.441 ], 00:11:36.441 
"product_name": "Malloc disk", 00:11:36.441 "block_size": 512, 00:11:36.441 "num_blocks": 65536, 00:11:36.441 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:36.441 "assigned_rate_limits": { 00:11:36.441 "rw_ios_per_sec": 0, 00:11:36.441 "rw_mbytes_per_sec": 0, 00:11:36.441 "r_mbytes_per_sec": 0, 00:11:36.441 "w_mbytes_per_sec": 0 00:11:36.441 }, 00:11:36.441 "claimed": true, 00:11:36.441 "claim_type": "exclusive_write", 00:11:36.441 "zoned": false, 00:11:36.441 "supported_io_types": { 00:11:36.441 "read": true, 00:11:36.441 "write": true, 00:11:36.441 "unmap": true, 00:11:36.441 "flush": true, 00:11:36.441 "reset": true, 00:11:36.441 "nvme_admin": false, 00:11:36.441 "nvme_io": false, 00:11:36.441 "nvme_io_md": false, 00:11:36.441 "write_zeroes": true, 00:11:36.441 "zcopy": true, 00:11:36.441 "get_zone_info": false, 00:11:36.441 "zone_management": false, 00:11:36.441 "zone_append": false, 00:11:36.441 "compare": false, 00:11:36.441 "compare_and_write": false, 00:11:36.441 "abort": true, 00:11:36.441 "seek_hole": false, 00:11:36.441 "seek_data": false, 00:11:36.441 "copy": true, 00:11:36.441 "nvme_iov_md": false 00:11:36.441 }, 00:11:36.441 "memory_domains": [ 00:11:36.441 { 00:11:36.441 "dma_device_id": "system", 00:11:36.441 "dma_device_type": 1 00:11:36.441 }, 00:11:36.441 { 00:11:36.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.441 "dma_device_type": 2 00:11:36.441 } 00:11:36.441 ], 00:11:36.441 "driver_specific": {} 00:11:36.441 } 00:11:36.441 ] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.441 15:20:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.441 "name": "Existed_Raid", 00:11:36.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.441 "strip_size_kb": 0, 00:11:36.441 "state": "configuring", 00:11:36.441 "raid_level": "raid1", 00:11:36.441 "superblock": false, 00:11:36.441 "num_base_bdevs": 4, 00:11:36.441 "num_base_bdevs_discovered": 3, 00:11:36.441 "num_base_bdevs_operational": 4, 00:11:36.441 "base_bdevs_list": [ 00:11:36.441 { 00:11:36.441 "name": "BaseBdev1", 
00:11:36.441 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:36.441 "is_configured": true, 00:11:36.441 "data_offset": 0, 00:11:36.441 "data_size": 65536 00:11:36.441 }, 00:11:36.441 { 00:11:36.441 "name": null, 00:11:36.441 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:36.441 "is_configured": false, 00:11:36.441 "data_offset": 0, 00:11:36.441 "data_size": 65536 00:11:36.441 }, 00:11:36.441 { 00:11:36.441 "name": "BaseBdev3", 00:11:36.441 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:36.441 "is_configured": true, 00:11:36.441 "data_offset": 0, 00:11:36.441 "data_size": 65536 00:11:36.441 }, 00:11:36.441 { 00:11:36.441 "name": "BaseBdev4", 00:11:36.441 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:36.441 "is_configured": true, 00:11:36.441 "data_offset": 0, 00:11:36.441 "data_size": 65536 00:11:36.441 } 00:11:36.441 ] 00:11:36.441 }' 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.441 15:20:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.701 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.701 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.701 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.701 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:36.701 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.960 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:36.960 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.961 [2024-11-10 15:20:43.076648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.961 "name": "Existed_Raid", 00:11:36.961 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.961 "strip_size_kb": 0, 00:11:36.961 "state": "configuring", 00:11:36.961 "raid_level": "raid1", 00:11:36.961 "superblock": false, 00:11:36.961 "num_base_bdevs": 4, 00:11:36.961 "num_base_bdevs_discovered": 2, 00:11:36.961 "num_base_bdevs_operational": 4, 00:11:36.961 "base_bdevs_list": [ 00:11:36.961 { 00:11:36.961 "name": "BaseBdev1", 00:11:36.961 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:36.961 "is_configured": true, 00:11:36.961 "data_offset": 0, 00:11:36.961 "data_size": 65536 00:11:36.961 }, 00:11:36.961 { 00:11:36.961 "name": null, 00:11:36.961 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:36.961 "is_configured": false, 00:11:36.961 "data_offset": 0, 00:11:36.961 "data_size": 65536 00:11:36.961 }, 00:11:36.961 { 00:11:36.961 "name": null, 00:11:36.961 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:36.961 "is_configured": false, 00:11:36.961 "data_offset": 0, 00:11:36.961 "data_size": 65536 00:11:36.961 }, 00:11:36.961 { 00:11:36.961 "name": "BaseBdev4", 00:11:36.961 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:36.961 "is_configured": true, 00:11:36.961 "data_offset": 0, 00:11:36.961 "data_size": 65536 00:11:36.961 } 00:11:36.961 ] 00:11:36.961 }' 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.961 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.220 [2024-11-10 15:20:43.512821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.220 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.220 "name": "Existed_Raid", 00:11:37.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.220 "strip_size_kb": 0, 00:11:37.220 "state": "configuring", 00:11:37.220 "raid_level": "raid1", 00:11:37.220 "superblock": false, 00:11:37.220 "num_base_bdevs": 4, 00:11:37.220 "num_base_bdevs_discovered": 3, 00:11:37.220 "num_base_bdevs_operational": 4, 00:11:37.220 "base_bdevs_list": [ 00:11:37.220 { 00:11:37.220 "name": "BaseBdev1", 00:11:37.220 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:37.220 "is_configured": true, 00:11:37.220 "data_offset": 0, 00:11:37.220 "data_size": 65536 00:11:37.220 }, 00:11:37.220 { 00:11:37.220 "name": null, 00:11:37.220 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:37.220 "is_configured": false, 00:11:37.220 "data_offset": 0, 00:11:37.220 "data_size": 65536 00:11:37.220 }, 00:11:37.220 { 00:11:37.220 "name": "BaseBdev3", 00:11:37.220 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:37.220 "is_configured": true, 00:11:37.220 "data_offset": 0, 00:11:37.220 "data_size": 65536 00:11:37.220 }, 00:11:37.220 { 00:11:37.221 "name": "BaseBdev4", 00:11:37.221 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:37.221 "is_configured": true, 00:11:37.221 
"data_offset": 0, 00:11:37.221 "data_size": 65536 00:11:37.221 } 00:11:37.221 ] 00:11:37.221 }' 00:11:37.221 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.221 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.805 [2024-11-10 15:20:43.976990] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.805 15:20:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.805 "name": "Existed_Raid", 00:11:37.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.805 "strip_size_kb": 0, 00:11:37.805 "state": "configuring", 00:11:37.805 "raid_level": "raid1", 00:11:37.805 "superblock": false, 00:11:37.805 "num_base_bdevs": 4, 00:11:37.805 "num_base_bdevs_discovered": 2, 00:11:37.805 "num_base_bdevs_operational": 4, 00:11:37.805 "base_bdevs_list": [ 00:11:37.805 { 00:11:37.805 "name": null, 00:11:37.805 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:37.805 "is_configured": false, 00:11:37.805 "data_offset": 0, 00:11:37.805 "data_size": 65536 00:11:37.805 }, 00:11:37.805 { 
00:11:37.805 "name": null, 00:11:37.805 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:37.805 "is_configured": false, 00:11:37.805 "data_offset": 0, 00:11:37.805 "data_size": 65536 00:11:37.805 }, 00:11:37.805 { 00:11:37.805 "name": "BaseBdev3", 00:11:37.805 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:37.805 "is_configured": true, 00:11:37.805 "data_offset": 0, 00:11:37.805 "data_size": 65536 00:11:37.805 }, 00:11:37.805 { 00:11:37.805 "name": "BaseBdev4", 00:11:37.805 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:37.805 "is_configured": true, 00:11:37.805 "data_offset": 0, 00:11:37.805 "data_size": 65536 00:11:37.805 } 00:11:37.805 ] 00:11:37.805 }' 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.805 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.064 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.064 [2024-11-10 15:20:44.424770] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:38.324 "name": "Existed_Raid", 00:11:38.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.324 "strip_size_kb": 0, 00:11:38.324 "state": "configuring", 00:11:38.324 "raid_level": "raid1", 00:11:38.324 "superblock": false, 00:11:38.324 "num_base_bdevs": 4, 00:11:38.324 "num_base_bdevs_discovered": 3, 00:11:38.324 "num_base_bdevs_operational": 4, 00:11:38.324 "base_bdevs_list": [ 00:11:38.324 { 00:11:38.324 "name": null, 00:11:38.324 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:38.324 "is_configured": false, 00:11:38.324 "data_offset": 0, 00:11:38.324 "data_size": 65536 00:11:38.324 }, 00:11:38.324 { 00:11:38.324 "name": "BaseBdev2", 00:11:38.324 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:38.324 "is_configured": true, 00:11:38.324 "data_offset": 0, 00:11:38.324 "data_size": 65536 00:11:38.324 }, 00:11:38.324 { 00:11:38.324 "name": "BaseBdev3", 00:11:38.324 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:38.324 "is_configured": true, 00:11:38.324 "data_offset": 0, 00:11:38.324 "data_size": 65536 00:11:38.324 }, 00:11:38.324 { 00:11:38.324 "name": "BaseBdev4", 00:11:38.324 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:38.324 "is_configured": true, 00:11:38.324 "data_offset": 0, 00:11:38.324 "data_size": 65536 00:11:38.324 } 00:11:38.324 ] 00:11:38.324 }' 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.324 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 47c85c85-49d7-4515-b35f-9c7351a18a0f 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 [2024-11-10 15:20:44.925842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:38.583 [2024-11-10 15:20:44.925985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:38.583 [2024-11-10 15:20:44.925997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:38.583 [2024-11-10 15:20:44.926301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:38.583 [2024-11-10 15:20:44.926456] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:38.583 [2024-11-10 15:20:44.926469] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:38.583 [2024-11-10 15:20:44.926692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.583 NewBaseBdev 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local i 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.583 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.842 [ 00:11:38.842 { 00:11:38.842 "name": "NewBaseBdev", 00:11:38.842 "aliases": [ 00:11:38.842 "47c85c85-49d7-4515-b35f-9c7351a18a0f" 00:11:38.842 ], 00:11:38.842 "product_name": "Malloc disk", 00:11:38.842 "block_size": 512, 00:11:38.842 "num_blocks": 65536, 
00:11:38.842 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:38.842 "assigned_rate_limits": { 00:11:38.842 "rw_ios_per_sec": 0, 00:11:38.842 "rw_mbytes_per_sec": 0, 00:11:38.842 "r_mbytes_per_sec": 0, 00:11:38.842 "w_mbytes_per_sec": 0 00:11:38.842 }, 00:11:38.842 "claimed": true, 00:11:38.842 "claim_type": "exclusive_write", 00:11:38.842 "zoned": false, 00:11:38.843 "supported_io_types": { 00:11:38.843 "read": true, 00:11:38.843 "write": true, 00:11:38.843 "unmap": true, 00:11:38.843 "flush": true, 00:11:38.843 "reset": true, 00:11:38.843 "nvme_admin": false, 00:11:38.843 "nvme_io": false, 00:11:38.843 "nvme_io_md": false, 00:11:38.843 "write_zeroes": true, 00:11:38.843 "zcopy": true, 00:11:38.843 "get_zone_info": false, 00:11:38.843 "zone_management": false, 00:11:38.843 "zone_append": false, 00:11:38.843 "compare": false, 00:11:38.843 "compare_and_write": false, 00:11:38.843 "abort": true, 00:11:38.843 "seek_hole": false, 00:11:38.843 "seek_data": false, 00:11:38.843 "copy": true, 00:11:38.843 "nvme_iov_md": false 00:11:38.843 }, 00:11:38.843 "memory_domains": [ 00:11:38.843 { 00:11:38.843 "dma_device_id": "system", 00:11:38.843 "dma_device_type": 1 00:11:38.843 }, 00:11:38.843 { 00:11:38.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.843 "dma_device_type": 2 00:11:38.843 } 00:11:38.843 ], 00:11:38.843 "driver_specific": {} 00:11:38.843 } 00:11:38.843 ] 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.843 
15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.843 15:20:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.843 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.843 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.843 "name": "Existed_Raid", 00:11:38.843 "uuid": "3d427a6e-6714-4dc6-9de3-cf09ff125c76", 00:11:38.843 "strip_size_kb": 0, 00:11:38.843 "state": "online", 00:11:38.843 "raid_level": "raid1", 00:11:38.843 "superblock": false, 00:11:38.843 "num_base_bdevs": 4, 00:11:38.843 "num_base_bdevs_discovered": 4, 00:11:38.843 "num_base_bdevs_operational": 4, 00:11:38.843 "base_bdevs_list": [ 00:11:38.843 { 00:11:38.843 "name": "NewBaseBdev", 00:11:38.843 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:38.843 "is_configured": true, 00:11:38.843 
"data_offset": 0, 00:11:38.843 "data_size": 65536 00:11:38.843 }, 00:11:38.843 { 00:11:38.843 "name": "BaseBdev2", 00:11:38.843 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:38.843 "is_configured": true, 00:11:38.843 "data_offset": 0, 00:11:38.843 "data_size": 65536 00:11:38.843 }, 00:11:38.843 { 00:11:38.843 "name": "BaseBdev3", 00:11:38.843 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:38.843 "is_configured": true, 00:11:38.843 "data_offset": 0, 00:11:38.843 "data_size": 65536 00:11:38.843 }, 00:11:38.843 { 00:11:38.843 "name": "BaseBdev4", 00:11:38.843 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:38.843 "is_configured": true, 00:11:38.843 "data_offset": 0, 00:11:38.843 "data_size": 65536 00:11:38.843 } 00:11:38.843 ] 00:11:38.843 }' 00:11:38.843 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.843 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:39.102 [2024-11-10 15:20:45.342364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.102 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:39.102 "name": "Existed_Raid", 00:11:39.102 "aliases": [ 00:11:39.102 "3d427a6e-6714-4dc6-9de3-cf09ff125c76" 00:11:39.102 ], 00:11:39.102 "product_name": "Raid Volume", 00:11:39.102 "block_size": 512, 00:11:39.102 "num_blocks": 65536, 00:11:39.102 "uuid": "3d427a6e-6714-4dc6-9de3-cf09ff125c76", 00:11:39.102 "assigned_rate_limits": { 00:11:39.102 "rw_ios_per_sec": 0, 00:11:39.102 "rw_mbytes_per_sec": 0, 00:11:39.103 "r_mbytes_per_sec": 0, 00:11:39.103 "w_mbytes_per_sec": 0 00:11:39.103 }, 00:11:39.103 "claimed": false, 00:11:39.103 "zoned": false, 00:11:39.103 "supported_io_types": { 00:11:39.103 "read": true, 00:11:39.103 "write": true, 00:11:39.103 "unmap": false, 00:11:39.103 "flush": false, 00:11:39.103 "reset": true, 00:11:39.103 "nvme_admin": false, 00:11:39.103 "nvme_io": false, 00:11:39.103 "nvme_io_md": false, 00:11:39.103 "write_zeroes": true, 00:11:39.103 "zcopy": false, 00:11:39.103 "get_zone_info": false, 00:11:39.103 "zone_management": false, 00:11:39.103 "zone_append": false, 00:11:39.103 "compare": false, 00:11:39.103 "compare_and_write": false, 00:11:39.103 "abort": false, 00:11:39.103 "seek_hole": false, 00:11:39.103 "seek_data": false, 00:11:39.103 "copy": false, 00:11:39.103 "nvme_iov_md": false 00:11:39.103 }, 00:11:39.103 "memory_domains": [ 00:11:39.103 { 00:11:39.103 "dma_device_id": "system", 00:11:39.103 "dma_device_type": 1 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.103 "dma_device_type": 2 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "system", 
00:11:39.103 "dma_device_type": 1 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.103 "dma_device_type": 2 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "system", 00:11:39.103 "dma_device_type": 1 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.103 "dma_device_type": 2 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "system", 00:11:39.103 "dma_device_type": 1 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.103 "dma_device_type": 2 00:11:39.103 } 00:11:39.103 ], 00:11:39.103 "driver_specific": { 00:11:39.103 "raid": { 00:11:39.103 "uuid": "3d427a6e-6714-4dc6-9de3-cf09ff125c76", 00:11:39.103 "strip_size_kb": 0, 00:11:39.103 "state": "online", 00:11:39.103 "raid_level": "raid1", 00:11:39.103 "superblock": false, 00:11:39.103 "num_base_bdevs": 4, 00:11:39.103 "num_base_bdevs_discovered": 4, 00:11:39.103 "num_base_bdevs_operational": 4, 00:11:39.103 "base_bdevs_list": [ 00:11:39.103 { 00:11:39.103 "name": "NewBaseBdev", 00:11:39.103 "uuid": "47c85c85-49d7-4515-b35f-9c7351a18a0f", 00:11:39.103 "is_configured": true, 00:11:39.103 "data_offset": 0, 00:11:39.103 "data_size": 65536 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "name": "BaseBdev2", 00:11:39.103 "uuid": "ae775147-8b49-47b7-afdd-45e955b56815", 00:11:39.103 "is_configured": true, 00:11:39.103 "data_offset": 0, 00:11:39.103 "data_size": 65536 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "name": "BaseBdev3", 00:11:39.103 "uuid": "ef4f10a7-23f1-472d-867f-4b8c646bf234", 00:11:39.103 "is_configured": true, 00:11:39.103 "data_offset": 0, 00:11:39.103 "data_size": 65536 00:11:39.103 }, 00:11:39.103 { 00:11:39.103 "name": "BaseBdev4", 00:11:39.103 "uuid": "b3ca7033-0d91-4754-9ffb-ae06ef2b12fd", 00:11:39.103 "is_configured": true, 00:11:39.103 "data_offset": 0, 00:11:39.103 "data_size": 65536 00:11:39.103 } 00:11:39.103 ] 00:11:39.103 } 00:11:39.103 } 
00:11:39.103 }' 00:11:39.103 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:39.103 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:39.103 BaseBdev2 00:11:39.103 BaseBdev3 00:11:39.103 BaseBdev4' 00:11:39.103 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.362 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:39.362 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.362 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:39.362 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.362 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.363 [2024-11-10 15:20:45.658126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:39.363 [2024-11-10 15:20:45.658255] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.363 [2024-11-10 15:20:45.658380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.363 [2024-11-10 15:20:45.658707] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.363 [2024-11-10 15:20:45.658762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 85370 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 85370 ']' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # kill -0 85370 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # uname 00:11:39.363 15:20:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85370 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85370' 00:11:39.363 killing process with pid 85370 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@971 -- # kill 85370 00:11:39.363 [2024-11-10 15:20:45.707261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.363 15:20:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@976 -- # wait 85370 00:11:39.622 [2024-11-10 15:20:45.784770] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:39.882 00:11:39.882 real 0m9.351s 00:11:39.882 user 0m15.641s 00:11:39.882 sys 0m2.019s 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.882 ************************************ 00:11:39.882 END TEST raid_state_function_test 00:11:39.882 ************************************ 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.882 15:20:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:39.882 15:20:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:39.882 15:20:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.882 15:20:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:11:39.882 ************************************ 00:11:39.882 START TEST raid_state_function_test_sb 00:11:39.882 ************************************ 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 4 true 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 
-- # echo BaseBdev4 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:39.882 Process raid pid: 86014 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=86014 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86014' 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 86014 00:11:39.882 15:20:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 86014 ']' 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:39.882 15:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.142 [2024-11-10 15:20:46.274513] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:40.142 [2024-11-10 15:20:46.274734] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.142 [2024-11-10 15:20:46.409114] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:40.142 [2024-11-10 15:20:46.446390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.142 [2024-11-10 15:20:46.488242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.401 [2024-11-10 15:20:46.567553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.401 [2024-11-10 15:20:46.567697] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.968 [2024-11-10 15:20:47.112589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:40.968 [2024-11-10 15:20:47.112762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:40.968 [2024-11-10 15:20:47.112786] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:40.968 [2024-11-10 15:20:47.112794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:40.968 [2024-11-10 15:20:47.112806] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:40.968 [2024-11-10 15:20:47.112812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:40.968 [2024-11-10 15:20:47.112823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:40.968 
[2024-11-10 15:20:47.112829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:40.968 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.969 "name": "Existed_Raid", 00:11:40.969 "uuid": "9bf4cd86-aaee-44c5-b744-1ec382eda758", 00:11:40.969 "strip_size_kb": 0, 00:11:40.969 "state": "configuring", 00:11:40.969 "raid_level": "raid1", 00:11:40.969 "superblock": true, 00:11:40.969 "num_base_bdevs": 4, 00:11:40.969 "num_base_bdevs_discovered": 0, 00:11:40.969 "num_base_bdevs_operational": 4, 00:11:40.969 "base_bdevs_list": [ 00:11:40.969 { 00:11:40.969 "name": "BaseBdev1", 00:11:40.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.969 "is_configured": false, 00:11:40.969 "data_offset": 0, 00:11:40.969 "data_size": 0 00:11:40.969 }, 00:11:40.969 { 00:11:40.969 "name": "BaseBdev2", 00:11:40.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.969 "is_configured": false, 00:11:40.969 "data_offset": 0, 00:11:40.969 "data_size": 0 00:11:40.969 }, 00:11:40.969 { 00:11:40.969 "name": "BaseBdev3", 00:11:40.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.969 "is_configured": false, 00:11:40.969 "data_offset": 0, 00:11:40.969 "data_size": 0 00:11:40.969 }, 00:11:40.969 { 00:11:40.969 "name": "BaseBdev4", 00:11:40.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.969 "is_configured": false, 00:11:40.969 "data_offset": 0, 00:11:40.969 "data_size": 0 00:11:40.969 } 00:11:40.969 ] 00:11:40.969 }' 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.969 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.228 
[2024-11-10 15:20:47.572680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.228 [2024-11-10 15:20:47.572818] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.228 [2024-11-10 15:20:47.580678] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.228 [2024-11-10 15:20:47.580772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.228 [2024-11-10 15:20:47.580813] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.228 [2024-11-10 15:20:47.580845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.228 [2024-11-10 15:20:47.580878] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.228 [2024-11-10 15:20:47.580920] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.228 [2024-11-10 15:20:47.580952] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.228 [2024-11-10 15:20:47.580978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.228 15:20:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.228 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 [2024-11-10 15:20:47.604313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.487 BaseBdev1 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 [ 00:11:41.487 { 00:11:41.487 "name": "BaseBdev1", 00:11:41.487 "aliases": [ 00:11:41.487 "dc3f1827-bfd1-4992-8c27-c2718210e022" 00:11:41.487 ], 00:11:41.487 "product_name": "Malloc disk", 00:11:41.487 "block_size": 512, 00:11:41.487 "num_blocks": 65536, 00:11:41.487 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:41.487 "assigned_rate_limits": { 00:11:41.487 "rw_ios_per_sec": 0, 00:11:41.487 "rw_mbytes_per_sec": 0, 00:11:41.487 "r_mbytes_per_sec": 0, 00:11:41.487 "w_mbytes_per_sec": 0 00:11:41.487 }, 00:11:41.487 "claimed": true, 00:11:41.487 "claim_type": "exclusive_write", 00:11:41.487 "zoned": false, 00:11:41.487 "supported_io_types": { 00:11:41.487 "read": true, 00:11:41.487 "write": true, 00:11:41.487 "unmap": true, 00:11:41.487 "flush": true, 00:11:41.487 "reset": true, 00:11:41.487 "nvme_admin": false, 00:11:41.487 "nvme_io": false, 00:11:41.487 "nvme_io_md": false, 00:11:41.487 "write_zeroes": true, 00:11:41.487 "zcopy": true, 00:11:41.487 "get_zone_info": false, 00:11:41.487 "zone_management": false, 00:11:41.487 "zone_append": false, 00:11:41.487 "compare": false, 00:11:41.487 "compare_and_write": false, 00:11:41.487 "abort": true, 00:11:41.487 "seek_hole": false, 00:11:41.487 "seek_data": false, 00:11:41.487 "copy": true, 00:11:41.487 "nvme_iov_md": false 00:11:41.487 }, 00:11:41.487 "memory_domains": [ 00:11:41.487 { 00:11:41.487 "dma_device_id": "system", 00:11:41.487 "dma_device_type": 1 00:11:41.487 }, 00:11:41.487 { 00:11:41.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.487 "dma_device_type": 2 00:11:41.487 } 00:11:41.487 ], 00:11:41.487 "driver_specific": {} 00:11:41.487 } 00:11:41.487 ] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:41.487 
15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.487 "name": "Existed_Raid", 00:11:41.487 "uuid": "eb9e979f-6faa-4f96-9794-020682721eb6", 00:11:41.487 "strip_size_kb": 0, 
00:11:41.487 "state": "configuring", 00:11:41.487 "raid_level": "raid1", 00:11:41.487 "superblock": true, 00:11:41.487 "num_base_bdevs": 4, 00:11:41.487 "num_base_bdevs_discovered": 1, 00:11:41.487 "num_base_bdevs_operational": 4, 00:11:41.487 "base_bdevs_list": [ 00:11:41.487 { 00:11:41.487 "name": "BaseBdev1", 00:11:41.487 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:41.487 "is_configured": true, 00:11:41.487 "data_offset": 2048, 00:11:41.487 "data_size": 63488 00:11:41.487 }, 00:11:41.487 { 00:11:41.487 "name": "BaseBdev2", 00:11:41.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.487 "is_configured": false, 00:11:41.487 "data_offset": 0, 00:11:41.487 "data_size": 0 00:11:41.487 }, 00:11:41.487 { 00:11:41.487 "name": "BaseBdev3", 00:11:41.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.487 "is_configured": false, 00:11:41.487 "data_offset": 0, 00:11:41.487 "data_size": 0 00:11:41.487 }, 00:11:41.487 { 00:11:41.487 "name": "BaseBdev4", 00:11:41.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.487 "is_configured": false, 00:11:41.487 "data_offset": 0, 00:11:41.487 "data_size": 0 00:11:41.487 } 00:11:41.487 ] 00:11:41.487 }' 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.487 15:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.747 [2024-11-10 15:20:48.032532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:41.747 [2024-11-10 15:20:48.032753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.747 [2024-11-10 15:20:48.044567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.747 [2024-11-10 15:20:48.047073] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.747 [2024-11-10 15:20:48.047175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.747 [2024-11-10 15:20:48.047193] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:41.747 [2024-11-10 15:20:48.047203] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.747 [2024-11-10 15:20:48.047213] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.747 [2024-11-10 15:20:48.047221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.747 "name": "Existed_Raid", 00:11:41.747 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:41.747 "strip_size_kb": 0, 00:11:41.747 "state": "configuring", 00:11:41.747 "raid_level": "raid1", 00:11:41.747 "superblock": true, 00:11:41.747 "num_base_bdevs": 4, 00:11:41.747 "num_base_bdevs_discovered": 1, 00:11:41.747 
"num_base_bdevs_operational": 4, 00:11:41.747 "base_bdevs_list": [ 00:11:41.747 { 00:11:41.747 "name": "BaseBdev1", 00:11:41.747 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:41.747 "is_configured": true, 00:11:41.747 "data_offset": 2048, 00:11:41.747 "data_size": 63488 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": "BaseBdev2", 00:11:41.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.747 "is_configured": false, 00:11:41.747 "data_offset": 0, 00:11:41.747 "data_size": 0 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": "BaseBdev3", 00:11:41.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.747 "is_configured": false, 00:11:41.747 "data_offset": 0, 00:11:41.747 "data_size": 0 00:11:41.747 }, 00:11:41.747 { 00:11:41.747 "name": "BaseBdev4", 00:11:41.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.747 "is_configured": false, 00:11:41.747 "data_offset": 0, 00:11:41.747 "data_size": 0 00:11:41.747 } 00:11:41.747 ] 00:11:41.747 }' 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.747 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.314 [2024-11-10 15:20:48.493673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.314 BaseBdev2 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:42.314 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.315 [ 00:11:42.315 { 00:11:42.315 "name": "BaseBdev2", 00:11:42.315 "aliases": [ 00:11:42.315 "3efd3b12-064e-440f-abb2-d37a60d3751f" 00:11:42.315 ], 00:11:42.315 "product_name": "Malloc disk", 00:11:42.315 "block_size": 512, 00:11:42.315 "num_blocks": 65536, 00:11:42.315 "uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:42.315 "assigned_rate_limits": { 00:11:42.315 "rw_ios_per_sec": 0, 00:11:42.315 "rw_mbytes_per_sec": 0, 00:11:42.315 "r_mbytes_per_sec": 0, 00:11:42.315 "w_mbytes_per_sec": 0 00:11:42.315 }, 00:11:42.315 "claimed": true, 00:11:42.315 "claim_type": "exclusive_write", 00:11:42.315 "zoned": false, 00:11:42.315 "supported_io_types": { 
00:11:42.315 "read": true, 00:11:42.315 "write": true, 00:11:42.315 "unmap": true, 00:11:42.315 "flush": true, 00:11:42.315 "reset": true, 00:11:42.315 "nvme_admin": false, 00:11:42.315 "nvme_io": false, 00:11:42.315 "nvme_io_md": false, 00:11:42.315 "write_zeroes": true, 00:11:42.315 "zcopy": true, 00:11:42.315 "get_zone_info": false, 00:11:42.315 "zone_management": false, 00:11:42.315 "zone_append": false, 00:11:42.315 "compare": false, 00:11:42.315 "compare_and_write": false, 00:11:42.315 "abort": true, 00:11:42.315 "seek_hole": false, 00:11:42.315 "seek_data": false, 00:11:42.315 "copy": true, 00:11:42.315 "nvme_iov_md": false 00:11:42.315 }, 00:11:42.315 "memory_domains": [ 00:11:42.315 { 00:11:42.315 "dma_device_id": "system", 00:11:42.315 "dma_device_type": 1 00:11:42.315 }, 00:11:42.315 { 00:11:42.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.315 "dma_device_type": 2 00:11:42.315 } 00:11:42.315 ], 00:11:42.315 "driver_specific": {} 00:11:42.315 } 00:11:42.315 ] 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.315 "name": "Existed_Raid", 00:11:42.315 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:42.315 "strip_size_kb": 0, 00:11:42.315 "state": "configuring", 00:11:42.315 "raid_level": "raid1", 00:11:42.315 "superblock": true, 00:11:42.315 "num_base_bdevs": 4, 00:11:42.315 "num_base_bdevs_discovered": 2, 00:11:42.315 "num_base_bdevs_operational": 4, 00:11:42.315 "base_bdevs_list": [ 00:11:42.315 { 00:11:42.315 "name": "BaseBdev1", 00:11:42.315 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:42.315 "is_configured": true, 00:11:42.315 "data_offset": 2048, 00:11:42.315 "data_size": 63488 00:11:42.315 }, 00:11:42.315 { 00:11:42.315 "name": "BaseBdev2", 00:11:42.315 
"uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:42.315 "is_configured": true, 00:11:42.315 "data_offset": 2048, 00:11:42.315 "data_size": 63488 00:11:42.315 }, 00:11:42.315 { 00:11:42.315 "name": "BaseBdev3", 00:11:42.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.315 "is_configured": false, 00:11:42.315 "data_offset": 0, 00:11:42.315 "data_size": 0 00:11:42.315 }, 00:11:42.315 { 00:11:42.315 "name": "BaseBdev4", 00:11:42.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.315 "is_configured": false, 00:11:42.315 "data_offset": 0, 00:11:42.315 "data_size": 0 00:11:42.315 } 00:11:42.315 ] 00:11:42.315 }' 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.315 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.883 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:42.883 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.883 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 [2024-11-10 15:20:48.996341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.884 BaseBdev3 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.884 15:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 [ 00:11:42.884 { 00:11:42.884 "name": "BaseBdev3", 00:11:42.884 "aliases": [ 00:11:42.884 "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc" 00:11:42.884 ], 00:11:42.884 "product_name": "Malloc disk", 00:11:42.884 "block_size": 512, 00:11:42.884 "num_blocks": 65536, 00:11:42.884 "uuid": "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc", 00:11:42.884 "assigned_rate_limits": { 00:11:42.884 "rw_ios_per_sec": 0, 00:11:42.884 "rw_mbytes_per_sec": 0, 00:11:42.884 "r_mbytes_per_sec": 0, 00:11:42.884 "w_mbytes_per_sec": 0 00:11:42.884 }, 00:11:42.884 "claimed": true, 00:11:42.884 "claim_type": "exclusive_write", 00:11:42.884 "zoned": false, 00:11:42.884 "supported_io_types": { 00:11:42.884 "read": true, 00:11:42.884 "write": true, 00:11:42.884 "unmap": true, 00:11:42.884 "flush": true, 00:11:42.884 "reset": true, 00:11:42.884 "nvme_admin": false, 00:11:42.884 "nvme_io": false, 00:11:42.884 "nvme_io_md": false, 00:11:42.884 "write_zeroes": true, 00:11:42.884 "zcopy": true, 00:11:42.884 "get_zone_info": false, 00:11:42.884 
"zone_management": false, 00:11:42.884 "zone_append": false, 00:11:42.884 "compare": false, 00:11:42.884 "compare_and_write": false, 00:11:42.884 "abort": true, 00:11:42.884 "seek_hole": false, 00:11:42.884 "seek_data": false, 00:11:42.884 "copy": true, 00:11:42.884 "nvme_iov_md": false 00:11:42.884 }, 00:11:42.884 "memory_domains": [ 00:11:42.884 { 00:11:42.884 "dma_device_id": "system", 00:11:42.884 "dma_device_type": 1 00:11:42.884 }, 00:11:42.884 { 00:11:42.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.884 "dma_device_type": 2 00:11:42.884 } 00:11:42.884 ], 00:11:42.884 "driver_specific": {} 00:11:42.884 } 00:11:42.884 ] 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.884 "name": "Existed_Raid", 00:11:42.884 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:42.884 "strip_size_kb": 0, 00:11:42.884 "state": "configuring", 00:11:42.884 "raid_level": "raid1", 00:11:42.884 "superblock": true, 00:11:42.884 "num_base_bdevs": 4, 00:11:42.884 "num_base_bdevs_discovered": 3, 00:11:42.884 "num_base_bdevs_operational": 4, 00:11:42.884 "base_bdevs_list": [ 00:11:42.884 { 00:11:42.884 "name": "BaseBdev1", 00:11:42.884 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:42.884 "is_configured": true, 00:11:42.884 "data_offset": 2048, 00:11:42.884 "data_size": 63488 00:11:42.884 }, 00:11:42.884 { 00:11:42.884 "name": "BaseBdev2", 00:11:42.884 "uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:42.884 "is_configured": true, 00:11:42.884 "data_offset": 2048, 00:11:42.884 "data_size": 63488 00:11:42.884 }, 00:11:42.884 { 00:11:42.884 "name": "BaseBdev3", 00:11:42.884 "uuid": "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc", 00:11:42.884 "is_configured": true, 00:11:42.884 "data_offset": 2048, 
00:11:42.884 "data_size": 63488 00:11:42.884 }, 00:11:42.884 { 00:11:42.884 "name": "BaseBdev4", 00:11:42.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.884 "is_configured": false, 00:11:42.884 "data_offset": 0, 00:11:42.884 "data_size": 0 00:11:42.884 } 00:11:42.884 ] 00:11:42.884 }' 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.884 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 [2024-11-10 15:20:49.453545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:43.143 [2024-11-10 15:20:49.453786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:43.143 [2024-11-10 15:20:49.453809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.143 [2024-11-10 15:20:49.454206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:43.143 BaseBdev4 00:11:43.143 [2024-11-10 15:20:49.454380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:43.143 [2024-11-10 15:20:49.454392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:11:43.143 [2024-11-10 15:20:49.454541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev4 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.143 [ 00:11:43.143 { 00:11:43.143 "name": "BaseBdev4", 00:11:43.143 "aliases": [ 00:11:43.143 "116b056a-76a4-4ac3-960d-414ab7a4708a" 00:11:43.143 ], 00:11:43.143 "product_name": "Malloc disk", 00:11:43.143 "block_size": 512, 00:11:43.143 "num_blocks": 65536, 00:11:43.143 "uuid": "116b056a-76a4-4ac3-960d-414ab7a4708a", 00:11:43.143 "assigned_rate_limits": { 00:11:43.143 "rw_ios_per_sec": 0, 00:11:43.143 "rw_mbytes_per_sec": 0, 00:11:43.143 "r_mbytes_per_sec": 0, 00:11:43.143 "w_mbytes_per_sec": 0 00:11:43.143 }, 00:11:43.143 "claimed": true, 00:11:43.143 "claim_type": 
"exclusive_write", 00:11:43.143 "zoned": false, 00:11:43.143 "supported_io_types": { 00:11:43.143 "read": true, 00:11:43.143 "write": true, 00:11:43.143 "unmap": true, 00:11:43.143 "flush": true, 00:11:43.143 "reset": true, 00:11:43.143 "nvme_admin": false, 00:11:43.143 "nvme_io": false, 00:11:43.143 "nvme_io_md": false, 00:11:43.143 "write_zeroes": true, 00:11:43.143 "zcopy": true, 00:11:43.143 "get_zone_info": false, 00:11:43.143 "zone_management": false, 00:11:43.143 "zone_append": false, 00:11:43.143 "compare": false, 00:11:43.143 "compare_and_write": false, 00:11:43.143 "abort": true, 00:11:43.143 "seek_hole": false, 00:11:43.143 "seek_data": false, 00:11:43.143 "copy": true, 00:11:43.143 "nvme_iov_md": false 00:11:43.143 }, 00:11:43.143 "memory_domains": [ 00:11:43.143 { 00:11:43.143 "dma_device_id": "system", 00:11:43.143 "dma_device_type": 1 00:11:43.143 }, 00:11:43.143 { 00:11:43.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.143 "dma_device_type": 2 00:11:43.143 } 00:11:43.143 ], 00:11:43.143 "driver_specific": {} 00:11:43.143 } 00:11:43.143 ] 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.143 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.403 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.403 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.403 "name": "Existed_Raid", 00:11:43.403 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:43.403 "strip_size_kb": 0, 00:11:43.403 "state": "online", 00:11:43.403 "raid_level": "raid1", 00:11:43.403 "superblock": true, 00:11:43.403 "num_base_bdevs": 4, 00:11:43.403 "num_base_bdevs_discovered": 4, 00:11:43.403 "num_base_bdevs_operational": 4, 00:11:43.403 "base_bdevs_list": [ 00:11:43.403 { 00:11:43.403 "name": "BaseBdev1", 00:11:43.403 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:43.403 "is_configured": true, 00:11:43.403 "data_offset": 2048, 00:11:43.403 "data_size": 63488 
00:11:43.403 }, 00:11:43.403 { 00:11:43.403 "name": "BaseBdev2", 00:11:43.403 "uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:43.403 "is_configured": true, 00:11:43.403 "data_offset": 2048, 00:11:43.403 "data_size": 63488 00:11:43.403 }, 00:11:43.403 { 00:11:43.403 "name": "BaseBdev3", 00:11:43.403 "uuid": "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc", 00:11:43.403 "is_configured": true, 00:11:43.403 "data_offset": 2048, 00:11:43.403 "data_size": 63488 00:11:43.403 }, 00:11:43.403 { 00:11:43.403 "name": "BaseBdev4", 00:11:43.403 "uuid": "116b056a-76a4-4ac3-960d-414ab7a4708a", 00:11:43.403 "is_configured": true, 00:11:43.403 "data_offset": 2048, 00:11:43.403 "data_size": 63488 00:11:43.403 } 00:11:43.403 ] 00:11:43.403 }' 00:11:43.403 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.403 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:43.666 [2024-11-10 15:20:49.922112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.666 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:43.666 "name": "Existed_Raid", 00:11:43.666 "aliases": [ 00:11:43.666 "9201df3c-3db0-4f58-b5c4-74401d921ba6" 00:11:43.666 ], 00:11:43.666 "product_name": "Raid Volume", 00:11:43.666 "block_size": 512, 00:11:43.666 "num_blocks": 63488, 00:11:43.666 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:43.666 "assigned_rate_limits": { 00:11:43.666 "rw_ios_per_sec": 0, 00:11:43.666 "rw_mbytes_per_sec": 0, 00:11:43.666 "r_mbytes_per_sec": 0, 00:11:43.666 "w_mbytes_per_sec": 0 00:11:43.666 }, 00:11:43.666 "claimed": false, 00:11:43.666 "zoned": false, 00:11:43.666 "supported_io_types": { 00:11:43.666 "read": true, 00:11:43.666 "write": true, 00:11:43.666 "unmap": false, 00:11:43.666 "flush": false, 00:11:43.666 "reset": true, 00:11:43.666 "nvme_admin": false, 00:11:43.666 "nvme_io": false, 00:11:43.666 "nvme_io_md": false, 00:11:43.666 "write_zeroes": true, 00:11:43.666 "zcopy": false, 00:11:43.666 "get_zone_info": false, 00:11:43.666 "zone_management": false, 00:11:43.666 "zone_append": false, 00:11:43.666 "compare": false, 00:11:43.666 "compare_and_write": false, 00:11:43.666 "abort": false, 00:11:43.666 "seek_hole": false, 00:11:43.666 "seek_data": false, 00:11:43.666 "copy": false, 00:11:43.666 "nvme_iov_md": false 00:11:43.666 }, 00:11:43.666 "memory_domains": [ 00:11:43.666 { 00:11:43.666 "dma_device_id": "system", 00:11:43.666 "dma_device_type": 1 00:11:43.666 }, 00:11:43.666 { 00:11:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.666 "dma_device_type": 2 00:11:43.666 }, 00:11:43.666 { 00:11:43.666 "dma_device_id": 
"system", 00:11:43.666 "dma_device_type": 1 00:11:43.666 }, 00:11:43.666 { 00:11:43.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.667 "dma_device_type": 2 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "dma_device_id": "system", 00:11:43.667 "dma_device_type": 1 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.667 "dma_device_type": 2 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "dma_device_id": "system", 00:11:43.667 "dma_device_type": 1 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.667 "dma_device_type": 2 00:11:43.667 } 00:11:43.667 ], 00:11:43.667 "driver_specific": { 00:11:43.667 "raid": { 00:11:43.667 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:43.667 "strip_size_kb": 0, 00:11:43.667 "state": "online", 00:11:43.667 "raid_level": "raid1", 00:11:43.667 "superblock": true, 00:11:43.667 "num_base_bdevs": 4, 00:11:43.667 "num_base_bdevs_discovered": 4, 00:11:43.667 "num_base_bdevs_operational": 4, 00:11:43.667 "base_bdevs_list": [ 00:11:43.667 { 00:11:43.667 "name": "BaseBdev1", 00:11:43.667 "uuid": "dc3f1827-bfd1-4992-8c27-c2718210e022", 00:11:43.667 "is_configured": true, 00:11:43.667 "data_offset": 2048, 00:11:43.667 "data_size": 63488 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "name": "BaseBdev2", 00:11:43.667 "uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:43.667 "is_configured": true, 00:11:43.667 "data_offset": 2048, 00:11:43.667 "data_size": 63488 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "name": "BaseBdev3", 00:11:43.667 "uuid": "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc", 00:11:43.667 "is_configured": true, 00:11:43.667 "data_offset": 2048, 00:11:43.667 "data_size": 63488 00:11:43.667 }, 00:11:43.667 { 00:11:43.667 "name": "BaseBdev4", 00:11:43.667 "uuid": "116b056a-76a4-4ac3-960d-414ab7a4708a", 00:11:43.667 "is_configured": true, 00:11:43.667 "data_offset": 2048, 00:11:43.667 "data_size": 63488 00:11:43.667 } 00:11:43.667 ] 00:11:43.667 } 
00:11:43.667 } 00:11:43.667 }' 00:11:43.667 15:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:43.667 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:43.667 BaseBdev2 00:11:43.667 BaseBdev3 00:11:43.667 BaseBdev4' 00:11:43.667 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.926 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:43.927 
15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 [2024-11-10 15:20:50.233935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:43.927 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.186 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.186 "name": "Existed_Raid", 00:11:44.186 "uuid": "9201df3c-3db0-4f58-b5c4-74401d921ba6", 00:11:44.186 "strip_size_kb": 0, 00:11:44.186 "state": "online", 00:11:44.186 "raid_level": "raid1", 00:11:44.186 "superblock": true, 00:11:44.186 "num_base_bdevs": 4, 00:11:44.186 "num_base_bdevs_discovered": 3, 00:11:44.186 "num_base_bdevs_operational": 3, 00:11:44.186 "base_bdevs_list": [ 00:11:44.186 { 00:11:44.186 "name": null, 00:11:44.186 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:44.186 "is_configured": false, 00:11:44.186 "data_offset": 0, 00:11:44.186 "data_size": 63488 00:11:44.186 }, 00:11:44.186 { 00:11:44.186 "name": "BaseBdev2", 00:11:44.186 "uuid": "3efd3b12-064e-440f-abb2-d37a60d3751f", 00:11:44.186 "is_configured": true, 00:11:44.186 "data_offset": 2048, 00:11:44.186 "data_size": 63488 00:11:44.186 }, 00:11:44.186 { 00:11:44.186 "name": "BaseBdev3", 00:11:44.186 "uuid": "a95c4bb1-7b1a-4395-8742-42cf3ab2d4dc", 00:11:44.186 "is_configured": true, 00:11:44.186 "data_offset": 2048, 00:11:44.186 "data_size": 63488 00:11:44.186 }, 00:11:44.186 { 00:11:44.186 "name": "BaseBdev4", 00:11:44.186 "uuid": "116b056a-76a4-4ac3-960d-414ab7a4708a", 00:11:44.186 "is_configured": true, 00:11:44.186 "data_offset": 2048, 00:11:44.186 "data_size": 63488 00:11:44.186 } 00:11:44.186 ] 00:11:44.186 }' 00:11:44.186 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.186 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.445 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.446 15:20:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.446 [2024-11-10 15:20:50.743122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.446 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.705 [2024-11-10 15:20:50.823979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.705 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.705 [2024-11-10 15:20:50.900289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:44.706 [2024-11-10 15:20:50.900433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.706 [2024-11-10 15:20:50.922070] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.706 [2024-11-10 
15:20:50.922204] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.706 [2024-11-10 15:20:50.922252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 BaseBdev2 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 [ 00:11:44.706 { 00:11:44.706 "name": "BaseBdev2", 00:11:44.706 "aliases": [ 00:11:44.706 "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a" 00:11:44.706 ], 00:11:44.706 "product_name": "Malloc disk", 00:11:44.706 "block_size": 512, 00:11:44.706 "num_blocks": 65536, 00:11:44.706 
"uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:44.706 "assigned_rate_limits": { 00:11:44.706 "rw_ios_per_sec": 0, 00:11:44.706 "rw_mbytes_per_sec": 0, 00:11:44.706 "r_mbytes_per_sec": 0, 00:11:44.706 "w_mbytes_per_sec": 0 00:11:44.706 }, 00:11:44.706 "claimed": false, 00:11:44.706 "zoned": false, 00:11:44.706 "supported_io_types": { 00:11:44.706 "read": true, 00:11:44.706 "write": true, 00:11:44.706 "unmap": true, 00:11:44.706 "flush": true, 00:11:44.706 "reset": true, 00:11:44.706 "nvme_admin": false, 00:11:44.706 "nvme_io": false, 00:11:44.706 "nvme_io_md": false, 00:11:44.706 "write_zeroes": true, 00:11:44.706 "zcopy": true, 00:11:44.706 "get_zone_info": false, 00:11:44.706 "zone_management": false, 00:11:44.706 "zone_append": false, 00:11:44.706 "compare": false, 00:11:44.706 "compare_and_write": false, 00:11:44.706 "abort": true, 00:11:44.706 "seek_hole": false, 00:11:44.706 "seek_data": false, 00:11:44.706 "copy": true, 00:11:44.706 "nvme_iov_md": false 00:11:44.706 }, 00:11:44.706 "memory_domains": [ 00:11:44.706 { 00:11:44.706 "dma_device_id": "system", 00:11:44.706 "dma_device_type": 1 00:11:44.706 }, 00:11:44.706 { 00:11:44.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.706 "dma_device_type": 2 00:11:44.706 } 00:11:44.706 ], 00:11:44.706 "driver_specific": {} 00:11:44.706 } 00:11:44.706 ] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 BaseBdev3 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.706 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 [ 00:11:44.971 { 00:11:44.971 "name": "BaseBdev3", 00:11:44.971 "aliases": [ 00:11:44.971 "11850787-238e-43df-97c9-f5251312ed17" 00:11:44.971 ], 00:11:44.971 "product_name": "Malloc disk", 00:11:44.971 "block_size": 512, 
00:11:44.971 "num_blocks": 65536, 00:11:44.971 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:44.971 "assigned_rate_limits": { 00:11:44.971 "rw_ios_per_sec": 0, 00:11:44.971 "rw_mbytes_per_sec": 0, 00:11:44.971 "r_mbytes_per_sec": 0, 00:11:44.971 "w_mbytes_per_sec": 0 00:11:44.971 }, 00:11:44.971 "claimed": false, 00:11:44.971 "zoned": false, 00:11:44.971 "supported_io_types": { 00:11:44.971 "read": true, 00:11:44.971 "write": true, 00:11:44.971 "unmap": true, 00:11:44.971 "flush": true, 00:11:44.971 "reset": true, 00:11:44.971 "nvme_admin": false, 00:11:44.971 "nvme_io": false, 00:11:44.971 "nvme_io_md": false, 00:11:44.971 "write_zeroes": true, 00:11:44.971 "zcopy": true, 00:11:44.971 "get_zone_info": false, 00:11:44.971 "zone_management": false, 00:11:44.971 "zone_append": false, 00:11:44.971 "compare": false, 00:11:44.971 "compare_and_write": false, 00:11:44.971 "abort": true, 00:11:44.971 "seek_hole": false, 00:11:44.971 "seek_data": false, 00:11:44.971 "copy": true, 00:11:44.971 "nvme_iov_md": false 00:11:44.971 }, 00:11:44.971 "memory_domains": [ 00:11:44.971 { 00:11:44.971 "dma_device_id": "system", 00:11:44.971 "dma_device_type": 1 00:11:44.971 }, 00:11:44.971 { 00:11:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.971 "dma_device_type": 2 00:11:44.971 } 00:11:44.971 ], 00:11:44.971 "driver_specific": {} 00:11:44.971 } 00:11:44.971 ] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.971 15:20:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 BaseBdev4 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 [ 00:11:44.971 { 00:11:44.971 "name": "BaseBdev4", 00:11:44.971 "aliases": [ 00:11:44.971 "ccdef0ae-e6e3-4d3e-a159-22e28d758beb" 00:11:44.971 ], 
00:11:44.971 "product_name": "Malloc disk", 00:11:44.971 "block_size": 512, 00:11:44.971 "num_blocks": 65536, 00:11:44.971 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:44.971 "assigned_rate_limits": { 00:11:44.971 "rw_ios_per_sec": 0, 00:11:44.971 "rw_mbytes_per_sec": 0, 00:11:44.971 "r_mbytes_per_sec": 0, 00:11:44.971 "w_mbytes_per_sec": 0 00:11:44.971 }, 00:11:44.971 "claimed": false, 00:11:44.971 "zoned": false, 00:11:44.971 "supported_io_types": { 00:11:44.971 "read": true, 00:11:44.971 "write": true, 00:11:44.971 "unmap": true, 00:11:44.971 "flush": true, 00:11:44.971 "reset": true, 00:11:44.971 "nvme_admin": false, 00:11:44.971 "nvme_io": false, 00:11:44.971 "nvme_io_md": false, 00:11:44.971 "write_zeroes": true, 00:11:44.971 "zcopy": true, 00:11:44.971 "get_zone_info": false, 00:11:44.971 "zone_management": false, 00:11:44.971 "zone_append": false, 00:11:44.971 "compare": false, 00:11:44.971 "compare_and_write": false, 00:11:44.971 "abort": true, 00:11:44.971 "seek_hole": false, 00:11:44.971 "seek_data": false, 00:11:44.971 "copy": true, 00:11:44.971 "nvme_iov_md": false 00:11:44.971 }, 00:11:44.971 "memory_domains": [ 00:11:44.971 { 00:11:44.971 "dma_device_id": "system", 00:11:44.971 "dma_device_type": 1 00:11:44.971 }, 00:11:44.971 { 00:11:44.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.971 "dma_device_type": 2 00:11:44.971 } 00:11:44.971 ], 00:11:44.971 "driver_specific": {} 00:11:44.971 } 00:11:44.971 ] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd 
bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 [2024-11-10 15:20:51.159700] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.971 [2024-11-10 15:20:51.159836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.971 [2024-11-10 15:20:51.159897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.971 [2024-11-10 15:20:51.162484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.971 [2024-11-10 15:20:51.162593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.971 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.972 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.972 "name": "Existed_Raid", 00:11:44.972 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:44.972 "strip_size_kb": 0, 00:11:44.972 "state": "configuring", 00:11:44.972 "raid_level": "raid1", 00:11:44.972 "superblock": true, 00:11:44.972 "num_base_bdevs": 4, 00:11:44.972 "num_base_bdevs_discovered": 3, 00:11:44.972 "num_base_bdevs_operational": 4, 00:11:44.972 "base_bdevs_list": [ 00:11:44.972 { 00:11:44.972 "name": "BaseBdev1", 00:11:44.972 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.972 "is_configured": false, 00:11:44.972 "data_offset": 0, 00:11:44.972 "data_size": 0 00:11:44.972 }, 00:11:44.972 { 00:11:44.972 "name": "BaseBdev2", 00:11:44.972 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:44.972 "is_configured": true, 00:11:44.972 "data_offset": 2048, 00:11:44.972 "data_size": 63488 00:11:44.972 }, 00:11:44.972 { 00:11:44.972 "name": "BaseBdev3", 00:11:44.972 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:44.972 "is_configured": true, 00:11:44.972 "data_offset": 2048, 
00:11:44.972 "data_size": 63488 00:11:44.972 }, 00:11:44.972 { 00:11:44.972 "name": "BaseBdev4", 00:11:44.972 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:44.972 "is_configured": true, 00:11:44.972 "data_offset": 2048, 00:11:44.972 "data_size": 63488 00:11:44.972 } 00:11:44.972 ] 00:11:44.972 }' 00:11:44.972 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.972 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.235 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:45.235 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.235 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.235 [2024-11-10 15:20:51.591851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.494 "name": "Existed_Raid", 00:11:45.494 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:45.494 "strip_size_kb": 0, 00:11:45.494 "state": "configuring", 00:11:45.494 "raid_level": "raid1", 00:11:45.494 "superblock": true, 00:11:45.494 "num_base_bdevs": 4, 00:11:45.494 "num_base_bdevs_discovered": 2, 00:11:45.494 "num_base_bdevs_operational": 4, 00:11:45.494 "base_bdevs_list": [ 00:11:45.494 { 00:11:45.494 "name": "BaseBdev1", 00:11:45.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.494 "is_configured": false, 00:11:45.494 "data_offset": 0, 00:11:45.494 "data_size": 0 00:11:45.494 }, 00:11:45.494 { 00:11:45.494 "name": null, 00:11:45.494 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:45.494 "is_configured": false, 00:11:45.494 "data_offset": 0, 00:11:45.494 "data_size": 63488 00:11:45.494 }, 00:11:45.494 { 00:11:45.494 "name": "BaseBdev3", 00:11:45.494 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:45.494 "is_configured": true, 00:11:45.494 "data_offset": 2048, 00:11:45.494 
"data_size": 63488 00:11:45.494 }, 00:11:45.494 { 00:11:45.494 "name": "BaseBdev4", 00:11:45.494 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:45.494 "is_configured": true, 00:11:45.494 "data_offset": 2048, 00:11:45.494 "data_size": 63488 00:11:45.494 } 00:11:45.494 ] 00:11:45.494 }' 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.494 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.753 15:20:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:45.753 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.753 15:20:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 [2024-11-10 15:20:52.052716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:45.753 BaseBdev1 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.753 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.753 [ 00:11:45.753 { 00:11:45.753 "name": "BaseBdev1", 00:11:45.753 "aliases": [ 00:11:45.753 "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9" 00:11:45.753 ], 00:11:45.753 "product_name": "Malloc disk", 00:11:45.753 "block_size": 512, 00:11:45.753 "num_blocks": 65536, 00:11:45.753 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:45.753 "assigned_rate_limits": { 00:11:45.753 "rw_ios_per_sec": 0, 00:11:45.753 "rw_mbytes_per_sec": 0, 00:11:45.753 "r_mbytes_per_sec": 0, 00:11:45.753 "w_mbytes_per_sec": 0 00:11:45.753 }, 00:11:45.753 "claimed": true, 00:11:45.753 "claim_type": "exclusive_write", 00:11:45.753 "zoned": false, 00:11:45.753 "supported_io_types": { 
00:11:45.753 "read": true, 00:11:45.753 "write": true, 00:11:45.753 "unmap": true, 00:11:45.753 "flush": true, 00:11:45.753 "reset": true, 00:11:45.753 "nvme_admin": false, 00:11:45.753 "nvme_io": false, 00:11:45.753 "nvme_io_md": false, 00:11:45.753 "write_zeroes": true, 00:11:45.753 "zcopy": true, 00:11:45.753 "get_zone_info": false, 00:11:45.753 "zone_management": false, 00:11:45.753 "zone_append": false, 00:11:45.753 "compare": false, 00:11:45.753 "compare_and_write": false, 00:11:45.753 "abort": true, 00:11:45.753 "seek_hole": false, 00:11:45.753 "seek_data": false, 00:11:45.753 "copy": true, 00:11:45.753 "nvme_iov_md": false 00:11:45.753 }, 00:11:45.754 "memory_domains": [ 00:11:45.754 { 00:11:45.754 "dma_device_id": "system", 00:11:45.754 "dma_device_type": 1 00:11:45.754 }, 00:11:45.754 { 00:11:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.754 "dma_device_type": 2 00:11:45.754 } 00:11:45.754 ], 00:11:45.754 "driver_specific": {} 00:11:45.754 } 00:11:45.754 ] 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.754 15:20:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.754 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.013 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.013 "name": "Existed_Raid", 00:11:46.013 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:46.013 "strip_size_kb": 0, 00:11:46.013 "state": "configuring", 00:11:46.013 "raid_level": "raid1", 00:11:46.013 "superblock": true, 00:11:46.013 "num_base_bdevs": 4, 00:11:46.013 "num_base_bdevs_discovered": 3, 00:11:46.013 "num_base_bdevs_operational": 4, 00:11:46.013 "base_bdevs_list": [ 00:11:46.013 { 00:11:46.013 "name": "BaseBdev1", 00:11:46.013 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:46.013 "is_configured": true, 00:11:46.013 "data_offset": 2048, 00:11:46.013 "data_size": 63488 00:11:46.013 }, 00:11:46.013 { 00:11:46.013 "name": null, 00:11:46.013 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:46.013 "is_configured": false, 00:11:46.013 "data_offset": 0, 00:11:46.013 "data_size": 63488 00:11:46.013 }, 00:11:46.013 { 00:11:46.013 "name": 
"BaseBdev3", 00:11:46.013 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:46.013 "is_configured": true, 00:11:46.013 "data_offset": 2048, 00:11:46.013 "data_size": 63488 00:11:46.013 }, 00:11:46.013 { 00:11:46.013 "name": "BaseBdev4", 00:11:46.013 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:46.013 "is_configured": true, 00:11:46.013 "data_offset": 2048, 00:11:46.013 "data_size": 63488 00:11:46.013 } 00:11:46.013 ] 00:11:46.013 }' 00:11:46.013 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.013 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.272 [2024-11-10 15:20:52.613026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.272 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.531 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.531 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.531 "name": "Existed_Raid", 00:11:46.531 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:46.531 "strip_size_kb": 0, 00:11:46.531 "state": "configuring", 00:11:46.531 
"raid_level": "raid1", 00:11:46.531 "superblock": true, 00:11:46.531 "num_base_bdevs": 4, 00:11:46.531 "num_base_bdevs_discovered": 2, 00:11:46.531 "num_base_bdevs_operational": 4, 00:11:46.531 "base_bdevs_list": [ 00:11:46.531 { 00:11:46.531 "name": "BaseBdev1", 00:11:46.531 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:46.531 "is_configured": true, 00:11:46.531 "data_offset": 2048, 00:11:46.531 "data_size": 63488 00:11:46.531 }, 00:11:46.531 { 00:11:46.531 "name": null, 00:11:46.531 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:46.531 "is_configured": false, 00:11:46.531 "data_offset": 0, 00:11:46.531 "data_size": 63488 00:11:46.531 }, 00:11:46.531 { 00:11:46.531 "name": null, 00:11:46.531 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:46.531 "is_configured": false, 00:11:46.531 "data_offset": 0, 00:11:46.531 "data_size": 63488 00:11:46.531 }, 00:11:46.531 { 00:11:46.531 "name": "BaseBdev4", 00:11:46.531 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:46.531 "is_configured": true, 00:11:46.531 "data_offset": 2048, 00:11:46.531 "data_size": 63488 00:11:46.531 } 00:11:46.531 ] 00:11:46.531 }' 00:11:46.531 15:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.531 15:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.791 [2024-11-10 15:20:53.109188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.791 15:20:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.791 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.050 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.050 "name": "Existed_Raid", 00:11:47.050 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:47.050 "strip_size_kb": 0, 00:11:47.050 "state": "configuring", 00:11:47.050 "raid_level": "raid1", 00:11:47.050 "superblock": true, 00:11:47.050 "num_base_bdevs": 4, 00:11:47.050 "num_base_bdevs_discovered": 3, 00:11:47.050 "num_base_bdevs_operational": 4, 00:11:47.050 "base_bdevs_list": [ 00:11:47.050 { 00:11:47.050 "name": "BaseBdev1", 00:11:47.050 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:47.050 "is_configured": true, 00:11:47.050 "data_offset": 2048, 00:11:47.050 "data_size": 63488 00:11:47.050 }, 00:11:47.050 { 00:11:47.050 "name": null, 00:11:47.050 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:47.050 "is_configured": false, 00:11:47.050 "data_offset": 0, 00:11:47.050 "data_size": 63488 00:11:47.050 }, 00:11:47.050 { 00:11:47.050 "name": "BaseBdev3", 00:11:47.050 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:47.050 "is_configured": true, 00:11:47.050 "data_offset": 2048, 00:11:47.050 "data_size": 63488 00:11:47.050 }, 00:11:47.050 { 00:11:47.050 "name": "BaseBdev4", 00:11:47.050 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:47.050 "is_configured": true, 00:11:47.050 "data_offset": 2048, 00:11:47.050 "data_size": 63488 00:11:47.050 } 00:11:47.050 ] 00:11:47.050 }' 00:11:47.051 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.051 
15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 [2024-11-10 15:20:53.553376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.310 15:20:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.310 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.310 "name": "Existed_Raid", 00:11:47.310 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:47.310 "strip_size_kb": 0, 00:11:47.310 "state": "configuring", 00:11:47.310 "raid_level": "raid1", 00:11:47.310 "superblock": true, 00:11:47.310 "num_base_bdevs": 4, 00:11:47.310 "num_base_bdevs_discovered": 2, 00:11:47.311 "num_base_bdevs_operational": 4, 00:11:47.311 "base_bdevs_list": [ 00:11:47.311 { 00:11:47.311 "name": null, 00:11:47.311 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:47.311 "is_configured": false, 00:11:47.311 "data_offset": 0, 00:11:47.311 "data_size": 63488 00:11:47.311 }, 00:11:47.311 { 00:11:47.311 "name": null, 00:11:47.311 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:47.311 "is_configured": false, 
00:11:47.311 "data_offset": 0, 00:11:47.311 "data_size": 63488 00:11:47.311 }, 00:11:47.311 { 00:11:47.311 "name": "BaseBdev3", 00:11:47.311 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:47.311 "is_configured": true, 00:11:47.311 "data_offset": 2048, 00:11:47.311 "data_size": 63488 00:11:47.311 }, 00:11:47.311 { 00:11:47.311 "name": "BaseBdev4", 00:11:47.311 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:47.311 "is_configured": true, 00:11:47.311 "data_offset": 2048, 00:11:47.311 "data_size": 63488 00:11:47.311 } 00:11:47.311 ] 00:11:47.311 }' 00:11:47.311 15:20:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.311 15:20:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.879 [2024-11-10 15:20:54.069144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.879 15:20:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.879 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.880 "name": 
"Existed_Raid", 00:11:47.880 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:47.880 "strip_size_kb": 0, 00:11:47.880 "state": "configuring", 00:11:47.880 "raid_level": "raid1", 00:11:47.880 "superblock": true, 00:11:47.880 "num_base_bdevs": 4, 00:11:47.880 "num_base_bdevs_discovered": 3, 00:11:47.880 "num_base_bdevs_operational": 4, 00:11:47.880 "base_bdevs_list": [ 00:11:47.880 { 00:11:47.880 "name": null, 00:11:47.880 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:47.880 "is_configured": false, 00:11:47.880 "data_offset": 0, 00:11:47.880 "data_size": 63488 00:11:47.880 }, 00:11:47.880 { 00:11:47.880 "name": "BaseBdev2", 00:11:47.880 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:47.880 "is_configured": true, 00:11:47.880 "data_offset": 2048, 00:11:47.880 "data_size": 63488 00:11:47.880 }, 00:11:47.880 { 00:11:47.880 "name": "BaseBdev3", 00:11:47.880 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:47.880 "is_configured": true, 00:11:47.880 "data_offset": 2048, 00:11:47.880 "data_size": 63488 00:11:47.880 }, 00:11:47.880 { 00:11:47.880 "name": "BaseBdev4", 00:11:47.880 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:47.880 "is_configured": true, 00:11:47.880 "data_offset": 2048, 00:11:47.880 "data_size": 63488 00:11:47.880 } 00:11:47.880 ] 00:11:47.880 }' 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.880 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.139 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.139 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.139 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 [2024-11-10 15:20:54.598034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:48.399 NewBaseBdev 00:11:48.399 [2024-11-10 15:20:54.598366] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:48.399 [2024-11-10 15:20:54.598387] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:48.399 [2024-11-10 15:20:54.598706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:11:48.399 [2024-11-10 15:20:54.598852] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:48.399 [2024-11-10 15:20:54.598866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:48.399 [2024-11-10 15:20:54.598987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.399 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.399 [ 00:11:48.399 { 00:11:48.399 "name": "NewBaseBdev", 00:11:48.399 "aliases": [ 00:11:48.399 "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9" 00:11:48.399 ], 00:11:48.399 "product_name": "Malloc disk", 00:11:48.399 "block_size": 512, 
00:11:48.399 "num_blocks": 65536, 00:11:48.399 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:48.399 "assigned_rate_limits": { 00:11:48.399 "rw_ios_per_sec": 0, 00:11:48.399 "rw_mbytes_per_sec": 0, 00:11:48.399 "r_mbytes_per_sec": 0, 00:11:48.399 "w_mbytes_per_sec": 0 00:11:48.400 }, 00:11:48.400 "claimed": true, 00:11:48.400 "claim_type": "exclusive_write", 00:11:48.400 "zoned": false, 00:11:48.400 "supported_io_types": { 00:11:48.400 "read": true, 00:11:48.400 "write": true, 00:11:48.400 "unmap": true, 00:11:48.400 "flush": true, 00:11:48.400 "reset": true, 00:11:48.400 "nvme_admin": false, 00:11:48.400 "nvme_io": false, 00:11:48.400 "nvme_io_md": false, 00:11:48.400 "write_zeroes": true, 00:11:48.400 "zcopy": true, 00:11:48.400 "get_zone_info": false, 00:11:48.400 "zone_management": false, 00:11:48.400 "zone_append": false, 00:11:48.400 "compare": false, 00:11:48.400 "compare_and_write": false, 00:11:48.400 "abort": true, 00:11:48.400 "seek_hole": false, 00:11:48.400 "seek_data": false, 00:11:48.400 "copy": true, 00:11:48.400 "nvme_iov_md": false 00:11:48.400 }, 00:11:48.400 "memory_domains": [ 00:11:48.400 { 00:11:48.400 "dma_device_id": "system", 00:11:48.400 "dma_device_type": 1 00:11:48.400 }, 00:11:48.400 { 00:11:48.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.400 "dma_device_type": 2 00:11:48.400 } 00:11:48.400 ], 00:11:48.400 "driver_specific": {} 00:11:48.400 } 00:11:48.400 ] 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.400 "name": "Existed_Raid", 00:11:48.400 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:48.400 "strip_size_kb": 0, 00:11:48.400 "state": "online", 00:11:48.400 "raid_level": "raid1", 00:11:48.400 "superblock": true, 00:11:48.400 "num_base_bdevs": 4, 00:11:48.400 "num_base_bdevs_discovered": 4, 00:11:48.400 "num_base_bdevs_operational": 4, 00:11:48.400 "base_bdevs_list": [ 00:11:48.400 { 00:11:48.400 "name": "NewBaseBdev", 00:11:48.400 "uuid": 
"fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:48.400 "is_configured": true, 00:11:48.400 "data_offset": 2048, 00:11:48.400 "data_size": 63488 00:11:48.400 }, 00:11:48.400 { 00:11:48.400 "name": "BaseBdev2", 00:11:48.400 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:48.400 "is_configured": true, 00:11:48.400 "data_offset": 2048, 00:11:48.400 "data_size": 63488 00:11:48.400 }, 00:11:48.400 { 00:11:48.400 "name": "BaseBdev3", 00:11:48.400 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:48.400 "is_configured": true, 00:11:48.400 "data_offset": 2048, 00:11:48.400 "data_size": 63488 00:11:48.400 }, 00:11:48.400 { 00:11:48.400 "name": "BaseBdev4", 00:11:48.400 "uuid": "ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:48.400 "is_configured": true, 00:11:48.400 "data_offset": 2048, 00:11:48.400 "data_size": 63488 00:11:48.400 } 00:11:48.400 ] 00:11:48.400 }' 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.400 15:20:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:48.968 [2024-11-10 15:20:55.106638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:48.968 "name": "Existed_Raid", 00:11:48.968 "aliases": [ 00:11:48.968 "324a73c1-d16f-4bb3-a8b8-25f716880f0f" 00:11:48.968 ], 00:11:48.968 "product_name": "Raid Volume", 00:11:48.968 "block_size": 512, 00:11:48.968 "num_blocks": 63488, 00:11:48.968 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:48.968 "assigned_rate_limits": { 00:11:48.968 "rw_ios_per_sec": 0, 00:11:48.968 "rw_mbytes_per_sec": 0, 00:11:48.968 "r_mbytes_per_sec": 0, 00:11:48.968 "w_mbytes_per_sec": 0 00:11:48.968 }, 00:11:48.968 "claimed": false, 00:11:48.968 "zoned": false, 00:11:48.968 "supported_io_types": { 00:11:48.968 "read": true, 00:11:48.968 "write": true, 00:11:48.968 "unmap": false, 00:11:48.968 "flush": false, 00:11:48.968 "reset": true, 00:11:48.968 "nvme_admin": false, 00:11:48.968 "nvme_io": false, 00:11:48.968 "nvme_io_md": false, 00:11:48.968 "write_zeroes": true, 00:11:48.968 "zcopy": false, 00:11:48.968 "get_zone_info": false, 00:11:48.968 "zone_management": false, 00:11:48.968 "zone_append": false, 00:11:48.968 "compare": false, 00:11:48.968 "compare_and_write": false, 00:11:48.968 "abort": false, 00:11:48.968 "seek_hole": false, 00:11:48.968 "seek_data": false, 00:11:48.968 "copy": false, 00:11:48.968 "nvme_iov_md": false 00:11:48.968 }, 00:11:48.968 "memory_domains": [ 00:11:48.968 { 00:11:48.968 "dma_device_id": "system", 00:11:48.968 "dma_device_type": 1 00:11:48.968 }, 00:11:48.968 
{ 00:11:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.968 "dma_device_type": 2 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "system", 00:11:48.968 "dma_device_type": 1 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.968 "dma_device_type": 2 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "system", 00:11:48.968 "dma_device_type": 1 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.968 "dma_device_type": 2 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "system", 00:11:48.968 "dma_device_type": 1 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.968 "dma_device_type": 2 00:11:48.968 } 00:11:48.968 ], 00:11:48.968 "driver_specific": { 00:11:48.968 "raid": { 00:11:48.968 "uuid": "324a73c1-d16f-4bb3-a8b8-25f716880f0f", 00:11:48.968 "strip_size_kb": 0, 00:11:48.968 "state": "online", 00:11:48.968 "raid_level": "raid1", 00:11:48.968 "superblock": true, 00:11:48.968 "num_base_bdevs": 4, 00:11:48.968 "num_base_bdevs_discovered": 4, 00:11:48.968 "num_base_bdevs_operational": 4, 00:11:48.968 "base_bdevs_list": [ 00:11:48.968 { 00:11:48.968 "name": "NewBaseBdev", 00:11:48.968 "uuid": "fe3b3eb8-7b10-43d2-aba3-ea65c5d2b4a9", 00:11:48.968 "is_configured": true, 00:11:48.968 "data_offset": 2048, 00:11:48.968 "data_size": 63488 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "name": "BaseBdev2", 00:11:48.968 "uuid": "f27d2a4d-5a63-47ac-b7fb-51dd60e0e57a", 00:11:48.968 "is_configured": true, 00:11:48.968 "data_offset": 2048, 00:11:48.968 "data_size": 63488 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "name": "BaseBdev3", 00:11:48.968 "uuid": "11850787-238e-43df-97c9-f5251312ed17", 00:11:48.968 "is_configured": true, 00:11:48.968 "data_offset": 2048, 00:11:48.968 "data_size": 63488 00:11:48.968 }, 00:11:48.968 { 00:11:48.968 "name": "BaseBdev4", 00:11:48.968 "uuid": 
"ccdef0ae-e6e3-4d3e-a159-22e28d758beb", 00:11:48.968 "is_configured": true, 00:11:48.968 "data_offset": 2048, 00:11:48.968 "data_size": 63488 00:11:48.968 } 00:11:48.968 ] 00:11:48.968 } 00:11:48.968 } 00:11:48.968 }' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:48.968 BaseBdev2 00:11:48.968 BaseBdev3 00:11:48.968 BaseBdev4' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:48.968 15:20:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:48.968 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.229 [2024-11-10 15:20:55.426308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.229 [2024-11-10 15:20:55.426439] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.229 [2024-11-10 15:20:55.426557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.229 [2024-11-10 15:20:55.426905] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.229 [2024-11-10 15:20:55.426967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 86014 
00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 86014 ']' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 86014 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86014 00:11:49.229 killing process with pid 86014 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86014' 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 86014 00:11:49.229 [2024-11-10 15:20:55.472731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.229 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 86014 00:11:49.229 [2024-11-10 15:20:55.552268] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:49.799 15:20:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:49.799 00:11:49.799 real 0m9.705s 00:11:49.799 user 0m16.234s 00:11:49.799 sys 0m2.139s 00:11:49.799 ************************************ 00:11:49.799 END TEST raid_state_function_test_sb 00:11:49.799 ************************************ 00:11:49.799 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:49.799 15:20:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
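(Aside, not part of the captured log: the test above extracts the configured base bdev names from `bdev_get_bdevs` JSON with the jq filter `.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name`. A minimal standalone sketch of the same selection in Python, using a trimmed copy of the JSON shape recorded in this log — field names and values are taken from the output above, everything else is illustrative:)

```python
import json

# Trimmed verbatim from the bdev_get_bdevs output captured in the log above;
# only the fields the jq filter touches are kept.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "NewBaseBdev", "is_configured": true},
        {"name": "BaseBdev2",   "is_configured": true},
        {"name": "BaseBdev3",   "is_configured": true},
        {"name": "BaseBdev4",   "is_configured": true}
      ]
    }
  }
}
""")

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [
    b["name"]
    for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
    if b["is_configured"]
]
print(base_bdev_names)
```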
00:11:49.799 15:20:55 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:49.799 15:20:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:49.799 15:20:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.799 15:20:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:49.799 ************************************ 00:11:49.799 START TEST raid_superblock_test 00:11:49.799 ************************************ 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 4 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local 
raid_bdev 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=86668 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 86668 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 86668 ']' 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:49.799 15:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.799 [2024-11-10 15:20:56.042121] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:11:49.799 [2024-11-10 15:20:56.042337] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86668 ] 00:11:50.058 [2024-11-10 15:20:56.173456] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:11:50.058 [2024-11-10 15:20:56.192702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.058 [2024-11-10 15:20:56.232730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.058 [2024-11-10 15:20:56.310126] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.058 [2024-11-10 15:20:56.310262] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 malloc1 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.627 [2024-11-10 15:20:56.934188] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:50.627 [2024-11-10 15:20:56.934345] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.627 [2024-11-10 15:20:56.934397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:50.627 [2024-11-10 15:20:56.934434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.627 [2024-11-10 15:20:56.937133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.627 [2024-11-10 15:20:56.937205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:50.627 pt1 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:50.627 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.628 malloc2 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.628 [2024-11-10 15:20:56.973006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:50.628 [2024-11-10 15:20:56.973073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.628 [2024-11-10 15:20:56.973094] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:50.628 [2024-11-10 15:20:56.973103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.628 [2024-11-10 15:20:56.975731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.628 [2024-11-10 15:20:56.975769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:50.628 pt2 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.628 15:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 malloc3 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 [2024-11-10 15:20:57.008207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.888 [2024-11-10 15:20:57.008338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.888 [2024-11-10 15:20:57.008382] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:50.888 [2024-11-10 15:20:57.008415] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.888 [2024-11-10 15:20:57.010931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.888 [2024-11-10 15:20:57.011002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.888 pt3 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 malloc4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:50.888 15:20:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 [2024-11-10 15:20:57.057453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:50.888 [2024-11-10 15:20:57.057566] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.888 [2024-11-10 15:20:57.057610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:50.888 [2024-11-10 15:20:57.057642] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.888 [2024-11-10 15:20:57.060349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.888 [2024-11-10 15:20:57.060424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:50.888 pt4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 [2024-11-10 15:20:57.069529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:50.888 [2024-11-10 15:20:57.072090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:50.888 [2024-11-10 15:20:57.072221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.888 [2024-11-10 15:20:57.072294] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:50.888 [2024-11-10 15:20:57.072527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:11:50.888 [2024-11-10 15:20:57.072592] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.888 [2024-11-10 15:20:57.072931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:11:50.888 [2024-11-10 15:20:57.073179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:11:50.888 [2024-11-10 15:20:57.073235] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:11:50.888 [2024-11-10 15:20:57.073471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.888 15:20:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.888 "name": "raid_bdev1", 00:11:50.888 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:50.888 "strip_size_kb": 0, 00:11:50.888 "state": "online", 00:11:50.888 "raid_level": "raid1", 00:11:50.888 "superblock": true, 00:11:50.888 "num_base_bdevs": 4, 00:11:50.888 "num_base_bdevs_discovered": 4, 00:11:50.888 "num_base_bdevs_operational": 4, 00:11:50.888 "base_bdevs_list": [ 00:11:50.888 { 00:11:50.888 "name": "pt1", 00:11:50.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:50.888 "is_configured": true, 00:11:50.888 "data_offset": 2048, 00:11:50.888 "data_size": 63488 00:11:50.888 }, 00:11:50.888 { 00:11:50.888 "name": "pt2", 00:11:50.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:50.888 "is_configured": true, 00:11:50.888 "data_offset": 2048, 00:11:50.888 "data_size": 63488 00:11:50.888 }, 00:11:50.888 { 00:11:50.888 "name": "pt3", 00:11:50.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:50.888 "is_configured": true, 00:11:50.888 "data_offset": 2048, 00:11:50.888 "data_size": 63488 00:11:50.888 }, 00:11:50.888 { 00:11:50.888 "name": "pt4", 00:11:50.888 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:50.888 "is_configured": true, 00:11:50.888 "data_offset": 2048, 00:11:50.888 "data_size": 63488 00:11:50.888 } 
00:11:50.888 ]
00:11:50.888 }'
00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:50.888 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.457 [2024-11-10 15:20:57.566203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.457 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:11:51.457 "name": "raid_bdev1",
00:11:51.457 "aliases": [
00:11:51.457 "3a95b03a-9776-494e-b71f-d8c5d5c52e59"
00:11:51.457 ],
00:11:51.457 "product_name": "Raid Volume",
00:11:51.457 "block_size": 512,
00:11:51.457 "num_blocks": 63488,
00:11:51.457 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59",
00:11:51.457 "assigned_rate_limits": {
00:11:51.457 "rw_ios_per_sec": 0,
00:11:51.457 "rw_mbytes_per_sec": 0,
00:11:51.457 "r_mbytes_per_sec": 0,
00:11:51.457 "w_mbytes_per_sec": 0
00:11:51.457 },
00:11:51.457 "claimed": false,
00:11:51.457 "zoned": false,
00:11:51.457 "supported_io_types": {
00:11:51.457 "read": true,
00:11:51.457 "write": true,
00:11:51.457 "unmap": false,
00:11:51.457 "flush": false,
00:11:51.457 "reset": true,
00:11:51.457 "nvme_admin": false,
00:11:51.457 "nvme_io": false,
00:11:51.457 "nvme_io_md": false,
00:11:51.457 "write_zeroes": true,
00:11:51.457 "zcopy": false,
00:11:51.457 "get_zone_info": false,
00:11:51.457 "zone_management": false,
00:11:51.457 "zone_append": false,
00:11:51.457 "compare": false,
00:11:51.457 "compare_and_write": false,
00:11:51.457 "abort": false,
00:11:51.457 "seek_hole": false,
00:11:51.457 "seek_data": false,
00:11:51.457 "copy": false,
00:11:51.457 "nvme_iov_md": false
00:11:51.457 },
00:11:51.457 "memory_domains": [
00:11:51.457 {
00:11:51.457 "dma_device_id": "system",
00:11:51.457 "dma_device_type": 1
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.457 "dma_device_type": 2
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "system",
00:11:51.457 "dma_device_type": 1
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.457 "dma_device_type": 2
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "system",
00:11:51.457 "dma_device_type": 1
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.457 "dma_device_type": 2
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "system",
00:11:51.457 "dma_device_type": 1
00:11:51.457 },
00:11:51.457 {
00:11:51.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:51.457 "dma_device_type": 2
00:11:51.457 }
00:11:51.457 ],
00:11:51.457 "driver_specific": {
00:11:51.457 "raid": {
00:11:51.457 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59",
00:11:51.457 "strip_size_kb": 0,
00:11:51.457
"state": "online",
00:11:51.457 "raid_level": "raid1",
00:11:51.457 "superblock": true,
00:11:51.457 "num_base_bdevs": 4,
00:11:51.457 "num_base_bdevs_discovered": 4,
00:11:51.457 "num_base_bdevs_operational": 4,
00:11:51.457 "base_bdevs_list": [
00:11:51.457 {
00:11:51.457 "name": "pt1",
00:11:51.458 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:51.458 "is_configured": true,
00:11:51.458 "data_offset": 2048,
00:11:51.458 "data_size": 63488
00:11:51.458 },
00:11:51.458 {
00:11:51.458 "name": "pt2",
00:11:51.458 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:51.458 "is_configured": true,
00:11:51.458 "data_offset": 2048,
00:11:51.458 "data_size": 63488
00:11:51.458 },
00:11:51.458 {
00:11:51.458 "name": "pt3",
00:11:51.458 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:51.458 "is_configured": true,
00:11:51.458 "data_offset": 2048,
00:11:51.458 "data_size": 63488
00:11:51.458 },
00:11:51.458 {
00:11:51.458 "name": "pt4",
00:11:51.458 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:51.458 "is_configured": true,
00:11:51.458 "data_offset": 2048,
00:11:51.458 "data_size": 63488
00:11:51.458 }
00:11:51.458 ]
00:11:51.458 }
00:11:51.458 }
00:11:51.458 }'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:11:51.458 pt2
00:11:51.458 pt3
00:11:51.458 pt4'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b
pt1
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 --
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:51.458 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 [2024-11-10 15:20:57.918170] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3a95b03a-9776-494e-b71f-d8c5d5c52e59
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3a95b03a-9776-494e-b71f-d8c5d5c52e59 ']'
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 [2024-11-10 15:20:57.961798] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:51.718 [2024-11-10 15:20:57.961836] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:11:51.718 [2024-11-10 15:20:57.961934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:11:51.718 [2024-11-10 15:20:57.962080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:11:51.718 [2024-11-10 15:20:57.962097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@442 -- # raid_bdev=
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:11:51.718 15:20:58
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.718 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r
raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.978 [2024-11-10 15:20:58.122012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:11:51.978 [2024-11-10 15:20:58.124391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:11:51.978 [2024-11-10 15:20:58.124491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:11:51.978 [2024-11-10 15:20:58.124529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:11:51.978 [2024-11-10 15:20:58.124585] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:11:51.978 [2024-11-10 15:20:58.124642] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:11:51.978 [2024-11-10 15:20:58.124661] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:11:51.978 [2024-11-10 15:20:58.124679] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:11:51.978 [2024-11-10 15:20:58.124692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:11:51.978 [2024-11-10 15:20:58.124704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring
00:11:51.978 request:
00:11:51.978 {
00:11:51.978 "name": "raid_bdev1",
00:11:51.978 "raid_level": "raid1",
00:11:51.978 "base_bdevs": [
00:11:51.978 "malloc1",
00:11:51.978 "malloc2",
00:11:51.978 "malloc3",
00:11:51.978 "malloc4"
00:11:51.978 ],
00:11:51.978 "superblock": false,
00:11:51.978 "method": "bdev_raid_create",
00:11:51.978 "req_id": 1
00:11:51.978 }
00:11:51.978 Got JSON-RPC error response
00:11:51.978 response:
00:11:51.978 {
00:11:51.978 "code": -17,
00:11:51.978 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:11:51.978 }
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.978 [2024-11-10 15:20:58.185920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:11:51.978 [2024-11-10 15:20:58.186084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:51.978 [2024-11-10 15:20:58.186123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:11:51.978 [2024-11-10 15:20:58.186168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:51.978 [2024-11-10 15:20:58.188713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:51.978 [2024-11-10 15:20:58.188795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:11:51.978 [2024-11-10 15:20:58.188912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:11:51.978 [2024-11-10 15:20:58.189004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
pt1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:51.978 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:51.979 "name": "raid_bdev1",
00:11:51.979 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59",
00:11:51.979 "strip_size_kb": 0,
00:11:51.979 "state": "configuring",
00:11:51.979 "raid_level": "raid1",
00:11:51.979 "superblock": true,
00:11:51.979 "num_base_bdevs": 4,
00:11:51.979 "num_base_bdevs_discovered": 1,
00:11:51.979 "num_base_bdevs_operational": 4,
00:11:51.979 "base_bdevs_list": [
00:11:51.979 {
00:11:51.979 "name": "pt1",
00:11:51.979 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:51.979 "is_configured": true,
00:11:51.979 "data_offset": 2048,
00:11:51.979 "data_size": 63488
00:11:51.979 },
00:11:51.979 {
00:11:51.979 "name": null,
00:11:51.979 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:51.979 "is_configured": false,
00:11:51.979 "data_offset": 2048,
00:11:51.979 "data_size": 63488
00:11:51.979 },
00:11:51.979 {
00:11:51.979 "name": null,
00:11:51.979 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:51.979 "is_configured": false,
00:11:51.979 "data_offset": 2048,
00:11:51.979 "data_size": 63488
00:11:51.979 },
00:11:51.979 {
00:11:51.979 "name": null,
00:11:51.979 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:51.979 "is_configured": false,
00:11:51.979 "data_offset": 2048,
00:11:51.979
"data_size": 63488
00:11:51.979 }
00:11:51.979 ]
00:11:51.979 }'
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:51.979 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.551 [2024-11-10 15:20:58.650076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:52.551 [2024-11-10 15:20:58.650174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:52.551 [2024-11-10 15:20:58.650199] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:11:52.551 [2024-11-10 15:20:58.650211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:52.551 [2024-11-10 15:20:58.650689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:52.551 [2024-11-10 15:20:58.650709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:52.551 [2024-11-10 15:20:58.650797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:52.551 [2024-11-10 15:20:58.650829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
pt2
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test --
common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.551 [2024-11-10 15:20:58.662010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- #
[[ 0 == 0 ]]
00:11:52.551 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:52.551 "name": "raid_bdev1",
00:11:52.551 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59",
00:11:52.551 "strip_size_kb": 0,
00:11:52.551 "state": "configuring",
00:11:52.551 "raid_level": "raid1",
00:11:52.551 "superblock": true,
00:11:52.551 "num_base_bdevs": 4,
00:11:52.551 "num_base_bdevs_discovered": 1,
00:11:52.551 "num_base_bdevs_operational": 4,
00:11:52.551 "base_bdevs_list": [
00:11:52.551 {
00:11:52.551 "name": "pt1",
00:11:52.551 "uuid": "00000000-0000-0000-0000-000000000001",
00:11:52.551 "is_configured": true,
00:11:52.551 "data_offset": 2048,
00:11:52.551 "data_size": 63488
00:11:52.551 },
00:11:52.551 {
00:11:52.551 "name": null,
00:11:52.551 "uuid": "00000000-0000-0000-0000-000000000002",
00:11:52.551 "is_configured": false,
00:11:52.551 "data_offset": 0,
00:11:52.551 "data_size": 63488
00:11:52.551 },
00:11:52.551 {
00:11:52.551 "name": null,
00:11:52.551 "uuid": "00000000-0000-0000-0000-000000000003",
00:11:52.551 "is_configured": false,
00:11:52.551 "data_offset": 2048,
00:11:52.551 "data_size": 63488
00:11:52.552 },
00:11:52.552 {
00:11:52.552 "name": null,
00:11:52.552 "uuid": "00000000-0000-0000-0000-000000000004",
00:11:52.552 "is_configured": false,
00:11:52.552 "data_offset": 2048,
00:11:52.552 "data_size": 63488
00:11:52.552 }
00:11:52.552 ]
00:11:52.552 }'
00:11:52.552 15:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:52.552 15:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.818 [2024-11-10 15:20:59.054192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:11:52.818 [2024-11-10 15:20:59.054409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:52.818 [2024-11-10 15:20:59.054465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:11:52.818 [2024-11-10 15:20:59.054518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:52.818 [2024-11-10 15:20:59.055126] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:52.818 [2024-11-10 15:20:59.055209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:11:52.818 [2024-11-10 15:20:59.055374] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:11:52.818 [2024-11-10 15:20:59.055446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
pt2
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.818 [2024-11-10 15:20:59.066143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:11:52.818
[2024-11-10 15:20:59.066194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:52.818 [2024-11-10 15:20:59.066215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:11:52.818 [2024-11-10 15:20:59.066224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:52.818 [2024-11-10 15:20:59.066640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:52.818 [2024-11-10 15:20:59.066657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:11:52.818 [2024-11-10 15:20:59.066725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:11:52.818 [2024-11-10 15:20:59.066745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
pt3
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:52.818 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:11:52.818 [2024-11-10 15:20:59.078133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:11:52.818 [2024-11-10 15:20:59.078182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:52.818 [2024-11-10 15:20:59.078202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:11:52.818 [2024-11-10 15:20:59.078211] vbdev_passthru.c: 696:vbdev_passthru_register:
*NOTICE*: bdev claimed 00:11:52.818 [2024-11-10 15:20:59.078624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.818 [2024-11-10 15:20:59.078640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:52.818 [2024-11-10 15:20:59.078708] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:52.819 [2024-11-10 15:20:59.078727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:52.819 [2024-11-10 15:20:59.078860] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:11:52.819 [2024-11-10 15:20:59.078870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:52.819 [2024-11-10 15:20:59.079154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:52.819 [2024-11-10 15:20:59.079289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:11:52.819 [2024-11-10 15:20:59.079311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:11:52.819 [2024-11-10 15:20:59.079441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.819 pt4 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.819 15:20:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.819 "name": "raid_bdev1", 00:11:52.819 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:52.819 "strip_size_kb": 0, 00:11:52.819 "state": "online", 00:11:52.819 "raid_level": "raid1", 00:11:52.819 "superblock": true, 00:11:52.819 "num_base_bdevs": 4, 00:11:52.819 "num_base_bdevs_discovered": 4, 00:11:52.819 "num_base_bdevs_operational": 4, 00:11:52.819 "base_bdevs_list": [ 00:11:52.819 { 00:11:52.819 "name": "pt1", 00:11:52.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 
00:11:52.819 "name": "pt2", 00:11:52.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 00:11:52.819 "name": "pt3", 00:11:52.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 }, 00:11:52.819 { 00:11:52.819 "name": "pt4", 00:11:52.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:52.819 "is_configured": true, 00:11:52.819 "data_offset": 2048, 00:11:52.819 "data_size": 63488 00:11:52.819 } 00:11:52.819 ] 00:11:52.819 }' 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.819 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.388 [2024-11-10 15:20:59.542573] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:53.388 "name": "raid_bdev1", 00:11:53.388 "aliases": [ 00:11:53.388 "3a95b03a-9776-494e-b71f-d8c5d5c52e59" 00:11:53.388 ], 00:11:53.388 "product_name": "Raid Volume", 00:11:53.388 "block_size": 512, 00:11:53.388 "num_blocks": 63488, 00:11:53.388 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:53.388 "assigned_rate_limits": { 00:11:53.388 "rw_ios_per_sec": 0, 00:11:53.388 "rw_mbytes_per_sec": 0, 00:11:53.388 "r_mbytes_per_sec": 0, 00:11:53.388 "w_mbytes_per_sec": 0 00:11:53.388 }, 00:11:53.388 "claimed": false, 00:11:53.388 "zoned": false, 00:11:53.388 "supported_io_types": { 00:11:53.388 "read": true, 00:11:53.388 "write": true, 00:11:53.388 "unmap": false, 00:11:53.388 "flush": false, 00:11:53.388 "reset": true, 00:11:53.388 "nvme_admin": false, 00:11:53.388 "nvme_io": false, 00:11:53.388 "nvme_io_md": false, 00:11:53.388 "write_zeroes": true, 00:11:53.388 "zcopy": false, 00:11:53.388 "get_zone_info": false, 00:11:53.388 "zone_management": false, 00:11:53.388 "zone_append": false, 00:11:53.388 "compare": false, 00:11:53.388 "compare_and_write": false, 00:11:53.388 "abort": false, 00:11:53.388 "seek_hole": false, 00:11:53.388 "seek_data": false, 00:11:53.388 "copy": false, 00:11:53.388 "nvme_iov_md": false 00:11:53.388 }, 00:11:53.388 "memory_domains": [ 00:11:53.388 { 00:11:53.388 "dma_device_id": "system", 00:11:53.388 "dma_device_type": 1 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.388 "dma_device_type": 2 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "system", 00:11:53.388 "dma_device_type": 1 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.388 
"dma_device_type": 2 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "system", 00:11:53.388 "dma_device_type": 1 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.388 "dma_device_type": 2 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "system", 00:11:53.388 "dma_device_type": 1 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.388 "dma_device_type": 2 00:11:53.388 } 00:11:53.388 ], 00:11:53.388 "driver_specific": { 00:11:53.388 "raid": { 00:11:53.388 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:53.388 "strip_size_kb": 0, 00:11:53.388 "state": "online", 00:11:53.388 "raid_level": "raid1", 00:11:53.388 "superblock": true, 00:11:53.388 "num_base_bdevs": 4, 00:11:53.388 "num_base_bdevs_discovered": 4, 00:11:53.388 "num_base_bdevs_operational": 4, 00:11:53.388 "base_bdevs_list": [ 00:11:53.388 { 00:11:53.388 "name": "pt1", 00:11:53.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:53.388 "is_configured": true, 00:11:53.388 "data_offset": 2048, 00:11:53.388 "data_size": 63488 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "name": "pt2", 00:11:53.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.388 "is_configured": true, 00:11:53.388 "data_offset": 2048, 00:11:53.388 "data_size": 63488 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "name": "pt3", 00:11:53.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.388 "is_configured": true, 00:11:53.388 "data_offset": 2048, 00:11:53.388 "data_size": 63488 00:11:53.388 }, 00:11:53.388 { 00:11:53.388 "name": "pt4", 00:11:53.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.388 "is_configured": true, 00:11:53.388 "data_offset": 2048, 00:11:53.388 "data_size": 63488 00:11:53.388 } 00:11:53.388 ] 00:11:53.388 } 00:11:53.388 } 00:11:53.388 }' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:53.388 pt2 00:11:53.388 pt3 00:11:53.388 pt4' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.388 15:20:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.647 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:53.648 15:20:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:53.648 [2024-11-10 15:20:59.850755] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3a95b03a-9776-494e-b71f-d8c5d5c52e59 '!=' 3a95b03a-9776-494e-b71f-d8c5d5c52e59 ']' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.648 [2024-11-10 15:20:59.894503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.648 15:20:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.648 "name": "raid_bdev1", 00:11:53.648 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:53.648 "strip_size_kb": 0, 00:11:53.648 "state": "online", 00:11:53.648 "raid_level": "raid1", 00:11:53.648 "superblock": true, 00:11:53.648 "num_base_bdevs": 4, 00:11:53.648 "num_base_bdevs_discovered": 3, 00:11:53.648 "num_base_bdevs_operational": 3, 00:11:53.648 "base_bdevs_list": [ 00:11:53.648 { 00:11:53.648 "name": null, 00:11:53.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.648 
"is_configured": false, 00:11:53.648 "data_offset": 0, 00:11:53.648 "data_size": 63488 00:11:53.648 }, 00:11:53.648 { 00:11:53.648 "name": "pt2", 00:11:53.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:53.648 "is_configured": true, 00:11:53.648 "data_offset": 2048, 00:11:53.648 "data_size": 63488 00:11:53.648 }, 00:11:53.648 { 00:11:53.648 "name": "pt3", 00:11:53.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:53.648 "is_configured": true, 00:11:53.648 "data_offset": 2048, 00:11:53.648 "data_size": 63488 00:11:53.648 }, 00:11:53.648 { 00:11:53.648 "name": "pt4", 00:11:53.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:53.648 "is_configured": true, 00:11:53.648 "data_offset": 2048, 00:11:53.648 "data_size": 63488 00:11:53.648 } 00:11:53.648 ] 00:11:53.648 }' 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.648 15:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 [2024-11-10 15:21:00.326532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.244 [2024-11-10 15:21:00.326589] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.244 [2024-11-10 15:21:00.326703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.244 [2024-11-10 15:21:00.326786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.244 [2024-11-10 15:21:00.326797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 
00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 [2024-11-10 15:21:00.394484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:54.244 [2024-11-10 15:21:00.394613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.244 [2024-11-10 15:21:00.394640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:54.244 [2024-11-10 15:21:00.394649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.244 [2024-11-10 
15:21:00.397248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.244 [2024-11-10 15:21:00.397285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:54.244 [2024-11-10 15:21:00.397369] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:54.244 [2024-11-10 15:21:00.397410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:54.244 pt2 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.244 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.244 "name": "raid_bdev1", 00:11:54.244 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:54.244 "strip_size_kb": 0, 00:11:54.244 "state": "configuring", 00:11:54.244 "raid_level": "raid1", 00:11:54.244 "superblock": true, 00:11:54.244 "num_base_bdevs": 4, 00:11:54.244 "num_base_bdevs_discovered": 1, 00:11:54.245 "num_base_bdevs_operational": 3, 00:11:54.245 "base_bdevs_list": [ 00:11:54.245 { 00:11:54.245 "name": null, 00:11:54.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.245 "is_configured": false, 00:11:54.245 "data_offset": 2048, 00:11:54.245 "data_size": 63488 00:11:54.245 }, 00:11:54.245 { 00:11:54.245 "name": "pt2", 00:11:54.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.245 "is_configured": true, 00:11:54.245 "data_offset": 2048, 00:11:54.245 "data_size": 63488 00:11:54.245 }, 00:11:54.245 { 00:11:54.245 "name": null, 00:11:54.245 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.245 "is_configured": false, 00:11:54.245 "data_offset": 2048, 00:11:54.245 "data_size": 63488 00:11:54.245 }, 00:11:54.245 { 00:11:54.245 "name": null, 00:11:54.245 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.245 "is_configured": false, 00:11:54.245 "data_offset": 2048, 00:11:54.245 "data_size": 63488 00:11:54.245 } 00:11:54.245 ] 00:11:54.245 }' 00:11:54.245 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.245 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:54.504 15:21:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.504 [2024-11-10 15:21:00.830652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:54.504 [2024-11-10 15:21:00.830799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.504 [2024-11-10 15:21:00.830830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:54.504 [2024-11-10 15:21:00.830840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.504 [2024-11-10 15:21:00.831308] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.504 [2024-11-10 15:21:00.831343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:54.504 [2024-11-10 15:21:00.831429] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:54.504 [2024-11-10 15:21:00.831452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:54.504 pt3 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.504 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.762 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.762 "name": "raid_bdev1", 00:11:54.762 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:54.762 "strip_size_kb": 0, 00:11:54.762 "state": "configuring", 00:11:54.762 "raid_level": "raid1", 00:11:54.762 "superblock": true, 00:11:54.762 "num_base_bdevs": 4, 00:11:54.762 "num_base_bdevs_discovered": 2, 00:11:54.762 "num_base_bdevs_operational": 3, 00:11:54.762 "base_bdevs_list": [ 00:11:54.762 { 00:11:54.762 "name": null, 00:11:54.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.762 "is_configured": false, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "pt2", 
00:11:54.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": "pt3", 00:11:54.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:54.762 "is_configured": true, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 }, 00:11:54.762 { 00:11:54.762 "name": null, 00:11:54.762 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:54.762 "is_configured": false, 00:11:54.762 "data_offset": 2048, 00:11:54.762 "data_size": 63488 00:11:54.762 } 00:11:54.762 ] 00:11:54.762 }' 00:11:54.762 15:21:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.762 15:21:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.020 [2024-11-10 15:21:01.230794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:55.020 [2024-11-10 15:21:01.230880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.020 [2024-11-10 15:21:01.230907] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:55.020 [2024-11-10 15:21:01.230918] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:55.020 [2024-11-10 15:21:01.231485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.020 [2024-11-10 15:21:01.231506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:55.020 [2024-11-10 15:21:01.231596] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:55.020 [2024-11-10 15:21:01.231629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:55.020 [2024-11-10 15:21:01.231762] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:55.020 [2024-11-10 15:21:01.231772] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.020 [2024-11-10 15:21:01.232084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:11:55.020 [2024-11-10 15:21:01.232243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:55.020 [2024-11-10 15:21:01.232258] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:55.020 [2024-11-10 15:21:01.232384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.020 pt4 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.020 15:21:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.020 "name": "raid_bdev1", 00:11:55.020 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:55.020 "strip_size_kb": 0, 00:11:55.020 "state": "online", 00:11:55.020 "raid_level": "raid1", 00:11:55.020 "superblock": true, 00:11:55.020 "num_base_bdevs": 4, 00:11:55.020 "num_base_bdevs_discovered": 3, 00:11:55.020 "num_base_bdevs_operational": 3, 00:11:55.020 "base_bdevs_list": [ 00:11:55.020 { 00:11:55.020 "name": null, 00:11:55.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.020 "is_configured": false, 00:11:55.020 "data_offset": 2048, 00:11:55.020 "data_size": 63488 00:11:55.020 }, 00:11:55.020 { 00:11:55.020 "name": "pt2", 00:11:55.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.020 "is_configured": true, 00:11:55.020 "data_offset": 2048, 00:11:55.020 "data_size": 63488 00:11:55.020 }, 
00:11:55.020 { 00:11:55.020 "name": "pt3", 00:11:55.020 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.020 "is_configured": true, 00:11:55.020 "data_offset": 2048, 00:11:55.020 "data_size": 63488 00:11:55.020 }, 00:11:55.020 { 00:11:55.020 "name": "pt4", 00:11:55.020 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.020 "is_configured": true, 00:11:55.020 "data_offset": 2048, 00:11:55.020 "data_size": 63488 00:11:55.020 } 00:11:55.020 ] 00:11:55.020 }' 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.020 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.279 [2024-11-10 15:21:01.614885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.279 [2024-11-10 15:21:01.615024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.279 [2024-11-10 15:21:01.615137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.279 [2024-11-10 15:21:01.615237] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.279 [2024-11-10 15:21:01.615316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.279 
15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.279 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.538 [2024-11-10 15:21:01.682880] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:55.538 [2024-11-10 15:21:01.682997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.538 [2024-11-10 15:21:01.683051] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:55.538 [2024-11-10 15:21:01.683104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.538 [2024-11-10 15:21:01.685732] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.538 [2024-11-10 15:21:01.685809] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:55.538 [2024-11-10 15:21:01.685909] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:55.538 [2024-11-10 15:21:01.685974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:55.538 [2024-11-10 15:21:01.686116] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:55.538 [2024-11-10 15:21:01.686193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.538 [2024-11-10 15:21:01.686237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:11:55.538 [2024-11-10 15:21:01.686319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:55.538 [2024-11-10 15:21:01.686465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:55.538 pt1 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.538 "name": "raid_bdev1", 00:11:55.538 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:55.538 "strip_size_kb": 0, 00:11:55.538 "state": "configuring", 00:11:55.538 "raid_level": "raid1", 00:11:55.538 "superblock": true, 00:11:55.538 "num_base_bdevs": 4, 00:11:55.538 "num_base_bdevs_discovered": 2, 00:11:55.538 "num_base_bdevs_operational": 3, 00:11:55.538 "base_bdevs_list": [ 00:11:55.538 { 00:11:55.538 "name": null, 00:11:55.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.538 "is_configured": false, 00:11:55.538 "data_offset": 2048, 00:11:55.538 "data_size": 63488 00:11:55.538 }, 00:11:55.538 { 00:11:55.538 "name": "pt2", 00:11:55.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:55.538 "is_configured": true, 00:11:55.538 "data_offset": 2048, 00:11:55.538 "data_size": 63488 00:11:55.538 }, 00:11:55.538 { 00:11:55.538 
"name": "pt3", 00:11:55.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:55.538 "is_configured": true, 00:11:55.538 "data_offset": 2048, 00:11:55.538 "data_size": 63488 00:11:55.538 }, 00:11:55.538 { 00:11:55.538 "name": null, 00:11:55.538 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:55.538 "is_configured": false, 00:11:55.538 "data_offset": 2048, 00:11:55.538 "data_size": 63488 00:11:55.538 } 00:11:55.538 ] 00:11:55.538 }' 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.538 15:21:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.796 [2024-11-10 15:21:02.147008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:55.796 [2024-11-10 15:21:02.147191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.796 [2024-11-10 15:21:02.147222] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:55.796 [2024-11-10 15:21:02.147233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.796 [2024-11-10 15:21:02.147777] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.796 [2024-11-10 15:21:02.147798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:55.796 [2024-11-10 15:21:02.147889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:55.796 [2024-11-10 15:21:02.147914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:55.796 [2024-11-10 15:21:02.148067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:55.796 [2024-11-10 15:21:02.148078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:55.796 [2024-11-10 15:21:02.148383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:55.796 [2024-11-10 15:21:02.148517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:55.796 [2024-11-10 15:21:02.148598] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:55.796 [2024-11-10 15:21:02.148733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:55.796 pt4 00:11:55.796 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.797 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.056 "name": "raid_bdev1", 00:11:56.056 "uuid": "3a95b03a-9776-494e-b71f-d8c5d5c52e59", 00:11:56.056 "strip_size_kb": 0, 00:11:56.056 "state": "online", 00:11:56.056 "raid_level": "raid1", 00:11:56.056 "superblock": true, 00:11:56.056 "num_base_bdevs": 4, 00:11:56.056 "num_base_bdevs_discovered": 3, 00:11:56.056 "num_base_bdevs_operational": 3, 00:11:56.056 "base_bdevs_list": [ 00:11:56.056 { 00:11:56.056 "name": null, 00:11:56.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.056 "is_configured": false, 00:11:56.056 "data_offset": 2048, 00:11:56.056 "data_size": 63488 00:11:56.056 }, 00:11:56.056 { 00:11:56.056 "name": "pt2", 
00:11:56.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:56.056 "is_configured": true, 00:11:56.056 "data_offset": 2048, 00:11:56.056 "data_size": 63488 00:11:56.056 }, 00:11:56.056 { 00:11:56.056 "name": "pt3", 00:11:56.056 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:56.056 "is_configured": true, 00:11:56.056 "data_offset": 2048, 00:11:56.056 "data_size": 63488 00:11:56.056 }, 00:11:56.056 { 00:11:56.056 "name": "pt4", 00:11:56.056 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:56.056 "is_configured": true, 00:11:56.056 "data_offset": 2048, 00:11:56.056 "data_size": 63488 00:11:56.056 } 00:11:56.056 ] 00:11:56.056 }' 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.056 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.315 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.315 [2024-11-10 15:21:02.583485] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3a95b03a-9776-494e-b71f-d8c5d5c52e59 '!=' 3a95b03a-9776-494e-b71f-d8c5d5c52e59 ']' 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 86668 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 86668 ']' 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # kill -0 86668 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # uname 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 86668 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 86668' 00:11:56.316 killing process with pid 86668 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@971 -- # kill 86668 00:11:56.316 [2024-11-10 15:21:02.655107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:56.316 [2024-11-10 15:21:02.655312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:56.316 15:21:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@976 -- # wait 86668 00:11:56.316 [2024-11-10 15:21:02.655423] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:56.316 
[2024-11-10 15:21:02.655438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:56.577 [2024-11-10 15:21:02.738370] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:56.836 15:21:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:56.836 00:11:56.836 real 0m7.111s 00:11:56.836 user 0m11.710s 00:11:56.836 sys 0m1.587s 00:11:56.836 ************************************ 00:11:56.836 END TEST raid_superblock_test 00:11:56.836 ************************************ 00:11:56.836 15:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.836 15:21:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.836 15:21:03 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:56.836 15:21:03 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:11:56.836 15:21:03 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.836 15:21:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:56.836 ************************************ 00:11:56.836 START TEST raid_read_error_test 00:11:56.836 ************************************ 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 read 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 
'!=' raid1 ']' 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ePqjhSCp8V 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87146 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87146 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # '[' -z 87146 ']' 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.836 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:56.837 15:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.096 [2024-11-10 15:21:03.268482] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:11:57.096 [2024-11-10 15:21:03.268720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87146 ] 00:11:57.096 [2024-11-10 15:21:03.407041] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:57.096 [2024-11-10 15:21:03.445874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.355 [2024-11-10 15:21:03.488114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.355 [2024-11-10 15:21:03.564005] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.355 [2024-11-10 15:21:03.564085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@866 -- # return 0 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 BaseBdev1_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 true 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 [2024-11-10 15:21:04.111138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:57.924 [2024-11-10 15:21:04.111206] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.924 [2024-11-10 15:21:04.111231] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:57.924 [2024-11-10 15:21:04.111254] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.924 [2024-11-10 15:21:04.113813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.924 [2024-11-10 15:21:04.113853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:57.924 BaseBdev1 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 BaseBdev2_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 true 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 [2024-11-10 15:21:04.157938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:57.924 [2024-11-10 15:21:04.158001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.924 [2024-11-10 15:21:04.158031] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:57.924 [2024-11-10 15:21:04.158044] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.924 [2024-11-10 15:21:04.160555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.924 [2024-11-10 15:21:04.160594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:57.924 BaseBdev2 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 BaseBdev3_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 true 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 [2024-11-10 15:21:04.204666] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:57.924 [2024-11-10 15:21:04.204814] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.924 [2024-11-10 15:21:04.204834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:57.924 [2024-11-10 15:21:04.204846] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.924 [2024-11-10 15:21:04.207263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.924 [2024-11-10 15:21:04.207308] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:57.924 BaseBdev3 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.924 15:21:04 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.924 BaseBdev4_malloc 00:11:57.924 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.925 true 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.925 [2024-11-10 15:21:04.262372] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:57.925 [2024-11-10 15:21:04.262439] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.925 [2024-11-10 15:21:04.262456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:57.925 [2024-11-10 15:21:04.262469] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.925 [2024-11-10 15:21:04.265080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.925 
[2024-11-10 15:21:04.265197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:57.925 BaseBdev4 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.925 [2024-11-10 15:21:04.274396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.925 [2024-11-10 15:21:04.276582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.925 [2024-11-10 15:21:04.276708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:57.925 [2024-11-10 15:21:04.276769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.925 [2024-11-10 15:21:04.276979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:57.925 [2024-11-10 15:21:04.276994] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.925 [2024-11-10 15:21:04.277259] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:11:57.925 [2024-11-10 15:21:04.277422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:57.925 [2024-11-10 15:21:04.277433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:57.925 [2024-11-10 15:21:04.277573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.925 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.184 "name": "raid_bdev1", 00:11:58.184 "uuid": "6639be27-3e04-4c53-854f-bc029823c26d", 00:11:58.184 "strip_size_kb": 0, 00:11:58.184 "state": "online", 00:11:58.184 "raid_level": "raid1", 00:11:58.184 "superblock": true, 
00:11:58.184 "num_base_bdevs": 4, 00:11:58.184 "num_base_bdevs_discovered": 4, 00:11:58.184 "num_base_bdevs_operational": 4, 00:11:58.184 "base_bdevs_list": [ 00:11:58.184 { 00:11:58.184 "name": "BaseBdev1", 00:11:58.184 "uuid": "54892c71-2446-528c-a249-22e9a60b4151", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev2", 00:11:58.184 "uuid": "9a23cd1c-efb7-5d30-bb4d-5f758fe6d33e", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev3", 00:11:58.184 "uuid": "15daba56-c282-5027-b93c-45a444103bf2", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 }, 00:11:58.184 { 00:11:58.184 "name": "BaseBdev4", 00:11:58.184 "uuid": "acedf159-965b-5ac0-b772-bfd8097b4b4c", 00:11:58.184 "is_configured": true, 00:11:58.184 "data_offset": 2048, 00:11:58.184 "data_size": 63488 00:11:58.184 } 00:11:58.184 ] 00:11:58.184 }' 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.184 15:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.443 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:58.443 15:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:58.443 [2024-11-10 15:21:04.799047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.381 15:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.640 15:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.640 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.640 "name": "raid_bdev1", 00:11:59.640 "uuid": "6639be27-3e04-4c53-854f-bc029823c26d", 00:11:59.640 "strip_size_kb": 0, 00:11:59.640 "state": "online", 00:11:59.640 "raid_level": "raid1", 00:11:59.640 "superblock": true, 00:11:59.640 "num_base_bdevs": 4, 00:11:59.640 "num_base_bdevs_discovered": 4, 00:11:59.640 "num_base_bdevs_operational": 4, 00:11:59.640 "base_bdevs_list": [ 00:11:59.640 { 00:11:59.640 "name": "BaseBdev1", 00:11:59.640 "uuid": "54892c71-2446-528c-a249-22e9a60b4151", 00:11:59.640 "is_configured": true, 00:11:59.640 "data_offset": 2048, 00:11:59.640 "data_size": 63488 00:11:59.640 }, 00:11:59.640 { 00:11:59.640 "name": "BaseBdev2", 00:11:59.640 "uuid": "9a23cd1c-efb7-5d30-bb4d-5f758fe6d33e", 00:11:59.640 "is_configured": true, 00:11:59.640 "data_offset": 2048, 00:11:59.640 "data_size": 63488 00:11:59.640 }, 00:11:59.640 { 00:11:59.640 "name": "BaseBdev3", 00:11:59.640 "uuid": "15daba56-c282-5027-b93c-45a444103bf2", 00:11:59.640 "is_configured": true, 00:11:59.640 "data_offset": 2048, 00:11:59.640 "data_size": 63488 00:11:59.640 }, 00:11:59.640 { 00:11:59.640 "name": "BaseBdev4", 00:11:59.640 "uuid": "acedf159-965b-5ac0-b772-bfd8097b4b4c", 00:11:59.640 "is_configured": true, 00:11:59.641 "data_offset": 2048, 00:11:59.641 "data_size": 63488 00:11:59.641 } 00:11:59.641 ] 00:11:59.641 }' 00:11:59.641 15:21:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.641 15:21:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 
00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.900 [2024-11-10 15:21:06.188555] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:59.900 [2024-11-10 15:21:06.188611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.900 [2024-11-10 15:21:06.191275] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:59.900 [2024-11-10 15:21:06.191440] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.900 [2024-11-10 15:21:06.191588] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:59.900 [2024-11-10 15:21:06.191603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:59.900 { 00:11:59.900 "results": [ 00:11:59.900 { 00:11:59.900 "job": "raid_bdev1", 00:11:59.900 "core_mask": "0x1", 00:11:59.900 "workload": "randrw", 00:11:59.900 "percentage": 50, 00:11:59.900 "status": "finished", 00:11:59.900 "queue_depth": 1, 00:11:59.900 "io_size": 131072, 00:11:59.900 "runtime": 1.387112, 00:11:59.900 "iops": 8562.394384880241, 00:11:59.900 "mibps": 1070.2992981100301, 00:11:59.900 "io_failed": 0, 00:11:59.900 "io_timeout": 0, 00:11:59.900 "avg_latency_us": 114.2972130715968, 00:11:59.900 "min_latency_us": 22.313257212586073, 00:11:59.900 "max_latency_us": 1428.0484616055087 00:11:59.900 } 00:11:59.900 ], 00:11:59.900 "core_count": 1 00:11:59.900 } 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87146 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # '[' -z 87146 ']' 
00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # kill -0 87146 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # uname 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87146 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:59.900 killing process with pid 87146 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87146' 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@971 -- # kill 87146 00:11:59.900 [2024-11-10 15:21:06.227822] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:59.900 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@976 -- # wait 87146 00:12:00.160 [2024-11-10 15:21:06.295936] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ePqjhSCp8V 00:12:00.420 ************************************ 00:12:00.420 END TEST raid_read_error_test 00:12:00.420 ************************************ 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- 
# case $1 in 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:00.420 00:12:00.420 real 0m3.480s 00:12:00.420 user 0m4.253s 00:12:00.420 sys 0m0.626s 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.420 15:21:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.420 15:21:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:00.420 15:21:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:12:00.420 15:21:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.420 15:21:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.420 ************************************ 00:12:00.420 START TEST raid_write_error_test 00:12:00.420 ************************************ 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1127 -- # raid_io_error_test raid1 4 write 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.420 
15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:00.420 15:21:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.em2FJRegVn 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=87275 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 87275 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # '[' -z 87275 ']' 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:00.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:00.420 15:21:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.680 [2024-11-10 15:21:06.792472] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:12:00.680 [2024-11-10 15:21:06.792598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87275 ] 00:12:00.680 [2024-11-10 15:21:06.925677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:00.680 [2024-11-10 15:21:06.964268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.680 [2024-11-10 15:21:07.003698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.941 [2024-11-10 15:21:07.080586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.941 [2024-11-10 15:21:07.080724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@866 -- # return 0 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 BaseBdev1_malloc 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 true 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:01.536 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.536 15:21:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.536 [2024-11-10 15:21:07.651887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:01.536 [2024-11-10 15:21:07.652043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.536 [2024-11-10 15:21:07.652068] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:01.536 [2024-11-10 15:21:07.652082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.537 [2024-11-10 15:21:07.654363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.537 [2024-11-10 15:21:07.654400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:01.537 BaseBdev1 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 BaseBdev2_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 true 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 [2024-11-10 15:21:07.698387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:01.537 [2024-11-10 15:21:07.698520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.537 [2024-11-10 15:21:07.698541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:01.537 [2024-11-10 15:21:07.698551] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.537 [2024-11-10 15:21:07.700938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.537 [2024-11-10 15:21:07.700974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:01.537 BaseBdev2 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 BaseBdev3_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 true 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 [2024-11-10 15:21:07.744949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:01.537 [2024-11-10 15:21:07.745087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.537 [2024-11-10 15:21:07.745108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:01.537 [2024-11-10 15:21:07.745119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.537 [2024-11-10 15:21:07.747314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.537 [2024-11-10 15:21:07.747367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:01.537 BaseBdev3 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 BaseBdev4_malloc 00:12:01.537 
15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 true 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 [2024-11-10 15:21:07.802526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:01.537 [2024-11-10 15:21:07.802658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:01.537 [2024-11-10 15:21:07.802681] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:01.537 [2024-11-10 15:21:07.802692] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:01.537 [2024-11-10 15:21:07.805073] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:01.537 [2024-11-10 15:21:07.805111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:01.537 BaseBdev4 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:01.537 15:21:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 [2024-11-10 15:21:07.814588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.537 [2024-11-10 15:21:07.816803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.537 [2024-11-10 15:21:07.816871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.537 [2024-11-10 15:21:07.816923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:01.537 [2024-11-10 15:21:07.817144] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:01.537 [2024-11-10 15:21:07.817158] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.537 [2024-11-10 15:21:07.817447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006cb0 00:12:01.537 [2024-11-10 15:21:07.817604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:01.537 [2024-11-10 15:21:07.817620] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:01.537 [2024-11-10 15:21:07.817761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.537 "name": "raid_bdev1", 00:12:01.537 "uuid": "e9ea1416-cfe7-4de3-bd10-23518ac76d52", 00:12:01.537 "strip_size_kb": 0, 00:12:01.537 "state": "online", 00:12:01.537 "raid_level": "raid1", 00:12:01.537 "superblock": true, 00:12:01.537 "num_base_bdevs": 4, 00:12:01.537 "num_base_bdevs_discovered": 4, 00:12:01.537 "num_base_bdevs_operational": 4, 00:12:01.537 "base_bdevs_list": [ 00:12:01.537 { 00:12:01.537 "name": "BaseBdev1", 00:12:01.537 "uuid": "098fe1b9-f57e-5f74-b9c7-3b662be0256f", 00:12:01.537 "is_configured": true, 00:12:01.537 "data_offset": 2048, 00:12:01.537 "data_size": 63488 00:12:01.537 }, 00:12:01.537 { 00:12:01.537 
"name": "BaseBdev2", 00:12:01.537 "uuid": "b51d3a46-2b9c-51b3-bf51-8cbb3188d547", 00:12:01.537 "is_configured": true, 00:12:01.537 "data_offset": 2048, 00:12:01.537 "data_size": 63488 00:12:01.537 }, 00:12:01.537 { 00:12:01.537 "name": "BaseBdev3", 00:12:01.537 "uuid": "c0d0d06e-232f-5e1d-ba4f-1e3a60e07727", 00:12:01.537 "is_configured": true, 00:12:01.537 "data_offset": 2048, 00:12:01.537 "data_size": 63488 00:12:01.537 }, 00:12:01.537 { 00:12:01.537 "name": "BaseBdev4", 00:12:01.537 "uuid": "401b3611-8e69-5c7d-9826-820e29602d4d", 00:12:01.537 "is_configured": true, 00:12:01.537 "data_offset": 2048, 00:12:01.537 "data_size": 63488 00:12:01.537 } 00:12:01.537 ] 00:12:01.537 }' 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.537 15:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.107 15:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:02.107 15:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:02.107 [2024-11-10 15:21:08.359217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006e50 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.046 [2024-11-10 15:21:09.273930] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:03.046 [2024-11-10 15:21:09.274137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:03.046 [2024-11-10 15:21:09.274435] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 
raid_ch: 0x60d000006e50 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.046 "name": "raid_bdev1", 00:12:03.046 "uuid": "e9ea1416-cfe7-4de3-bd10-23518ac76d52", 00:12:03.046 "strip_size_kb": 0, 00:12:03.046 "state": "online", 00:12:03.046 "raid_level": "raid1", 00:12:03.046 "superblock": true, 00:12:03.046 "num_base_bdevs": 4, 00:12:03.046 "num_base_bdevs_discovered": 3, 00:12:03.046 "num_base_bdevs_operational": 3, 00:12:03.046 "base_bdevs_list": [ 00:12:03.046 { 00:12:03.046 "name": null, 00:12:03.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.046 "is_configured": false, 00:12:03.046 "data_offset": 0, 00:12:03.046 "data_size": 63488 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev2", 00:12:03.046 "uuid": "b51d3a46-2b9c-51b3-bf51-8cbb3188d547", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 2048, 00:12:03.046 "data_size": 63488 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev3", 00:12:03.046 "uuid": "c0d0d06e-232f-5e1d-ba4f-1e3a60e07727", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 2048, 00:12:03.046 "data_size": 63488 00:12:03.046 }, 00:12:03.046 { 00:12:03.046 "name": "BaseBdev4", 00:12:03.046 "uuid": "401b3611-8e69-5c7d-9826-820e29602d4d", 00:12:03.046 "is_configured": true, 00:12:03.046 "data_offset": 2048, 00:12:03.046 "data_size": 63488 00:12:03.046 } 00:12:03.046 ] 00:12:03.046 }' 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.046 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.616 
15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.616 [2024-11-10 15:21:09.724575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.616 [2024-11-10 15:21:09.724636] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.616 [2024-11-10 15:21:09.727174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.616 [2024-11-10 15:21:09.727261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.616 [2024-11-10 15:21:09.727435] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.616 [2024-11-10 15:21:09.727484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.616 { 00:12:03.616 "results": [ 00:12:03.616 { 00:12:03.616 "job": "raid_bdev1", 00:12:03.616 "core_mask": "0x1", 00:12:03.616 "workload": "randrw", 00:12:03.616 "percentage": 50, 00:12:03.616 "status": "finished", 00:12:03.616 "queue_depth": 1, 00:12:03.616 "io_size": 131072, 00:12:03.616 "runtime": 1.362913, 00:12:03.616 "iops": 9247.839003663477, 00:12:03.616 "mibps": 1155.9798754579347, 00:12:03.616 "io_failed": 0, 00:12:03.616 "io_timeout": 0, 00:12:03.616 "avg_latency_us": 105.59067398259815, 00:12:03.616 "min_latency_us": 22.424823498649, 00:12:03.616 "max_latency_us": 1449.4691885295913 00:12:03.616 } 00:12:03.616 ], 00:12:03.616 "core_count": 1 00:12:03.616 } 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 87275 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # '[' -z 87275 ']' 00:12:03.616 
15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # kill -0 87275 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # uname 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87275 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87275' 00:12:03.616 killing process with pid 87275 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@971 -- # kill 87275 00:12:03.616 [2024-11-10 15:21:09.774270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.616 15:21:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@976 -- # wait 87275 00:12:03.616 [2024-11-10 15:21:09.840020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.em2FJRegVn 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:03.876 15:21:10 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:03.876 00:12:03.876 real 0m3.475s 00:12:03.876 user 0m4.247s 00:12:03.876 sys 0m0.630s 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:03.876 ************************************ 00:12:03.876 END TEST raid_write_error_test 00:12:03.876 ************************************ 00:12:03.876 15:21:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.876 15:21:10 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:03.876 15:21:10 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:03.876 15:21:10 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:03.876 15:21:10 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:03.876 15:21:10 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:03.876 15:21:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.136 ************************************ 00:12:04.136 START TEST raid_rebuild_test 00:12:04.136 ************************************ 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false false true 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:04.136 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87408 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 87408 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 87408 ']' 00:12:04.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:04.137 15:21:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.137 [2024-11-10 15:21:10.332168] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:12:04.137 [2024-11-10 15:21:10.332363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:12:04.137 Zero copy mechanism will not be used. 00:12:04.137 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87408 ] 00:12:04.137 [2024-11-10 15:21:10.465130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:04.396 [2024-11-10 15:21:10.505084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.396 [2024-11-10 15:21:10.546059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.396 [2024-11-10 15:21:10.624338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.396 [2024-11-10 15:21:10.624491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:04.964 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 BaseBdev1_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 [2024-11-10 15:21:11.192148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:04.965 [2024-11-10 15:21:11.192230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.965 [2024-11-10 15:21:11.192261] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:04.965 [2024-11-10 15:21:11.192279] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.965 [2024-11-10 15:21:11.194811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.965 [2024-11-10 15:21:11.194851] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.965 BaseBdev1 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 BaseBdev2_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 [2024-11-10 15:21:11.226914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:04.965 [2024-11-10 15:21:11.227056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.965 [2024-11-10 15:21:11.227093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:04.965 [2024-11-10 15:21:11.227124] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.965 [2024-11-10 15:21:11.229543] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.965 [2024-11-10 15:21:11.229617] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:04.965 BaseBdev2 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 spare_malloc 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 spare_delay 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 [2024-11-10 15:21:11.273365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:04.965 [2024-11-10 15:21:11.273420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.965 [2024-11-10 15:21:11.273439] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:04.965 [2024-11-10 15:21:11.273452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.965 [2024-11-10 
15:21:11.275749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.965 [2024-11-10 15:21:11.275786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:04.965 spare 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 [2024-11-10 15:21:11.285426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:04.965 [2024-11-10 15:21:11.287436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.965 [2024-11-10 15:21:11.287512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:04.965 [2024-11-10 15:21:11.287523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:04.965 [2024-11-10 15:21:11.287782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:04.965 [2024-11-10 15:21:11.287915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:04.965 [2024-11-10 15:21:11.287925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:04.965 [2024-11-10 15:21:11.288119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:04.965 15:21:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.965 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.225 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.225 "name": "raid_bdev1", 00:12:05.225 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:05.225 "strip_size_kb": 0, 00:12:05.225 "state": "online", 00:12:05.225 "raid_level": "raid1", 00:12:05.225 "superblock": false, 00:12:05.225 "num_base_bdevs": 2, 00:12:05.225 "num_base_bdevs_discovered": 2, 00:12:05.225 "num_base_bdevs_operational": 2, 00:12:05.225 "base_bdevs_list": [ 00:12:05.225 { 00:12:05.225 "name": "BaseBdev1", 
00:12:05.225 "uuid": "51d33748-ddbf-56cc-b7d4-4ab29120f1c0", 00:12:05.225 "is_configured": true, 00:12:05.225 "data_offset": 0, 00:12:05.225 "data_size": 65536 00:12:05.225 }, 00:12:05.225 { 00:12:05.225 "name": "BaseBdev2", 00:12:05.225 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:05.225 "is_configured": true, 00:12:05.225 "data_offset": 0, 00:12:05.225 "data_size": 65536 00:12:05.225 } 00:12:05.225 ] 00:12:05.225 }' 00:12:05.225 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.225 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:05.484 [2024-11-10 15:21:11.709834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:05.484 
15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:05.484 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:05.485 15:21:11 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:05.744 [2024-11-10 15:21:11.985686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:05.744 /dev/nbd0 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:05.744 1+0 records in 00:12:05.744 1+0 records out 00:12:05.744 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026396 s, 15.5 MB/s 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:05.744 15:21:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
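The `verify_raid_bdev_state` checks exercised throughout this run reduce to field comparisons on the JSON that `rpc_cmd bdev_raid_get_bdevs all` returns. A minimal Python sketch of that comparison is below; the dictionary mirrors the `raid_bdev_info` dumps in this log, and the helper name `check_raid_state` is illustrative, not part of SPDK:

```python
import json

def check_raid_state(info, expected_state, raid_level, num_operational):
    """Re-implements the field checks verify_raid_bdev_state performs
    on one entry of the bdev_raid_get_bdevs output (illustrative only)."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["num_base_bdevs_operational"] == num_operational
    # A base bdev counts as discovered only if it is still configured.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]
    return True

# Structure copied from the raid_bdev_info dump in this log, after
# BaseBdev1 has been removed (its slot is a null placeholder entry).
raid_bdev_info = json.loads('''{
  "name": "raid_bdev1",
  "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b",
  "strip_size_kb": 0,
  "state": "online",
  "raid_level": "raid1",
  "superblock": false,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 1,
  "num_base_bdevs_operational": 1,
  "base_bdevs_list": [
    {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": false, "data_offset": 0, "data_size": 65536},
    {"name": "BaseBdev2", "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4",
     "is_configured": true, "data_offset": 0, "data_size": 65536}
  ]
}''')

print(check_raid_state(raid_bdev_info, "online", "raid1", 1))  # True
```

The shell helper does the same thing with `jq -r '.[] | select(.name == "raid_bdev1")'` plus string comparisons; the sketch only makes the implicit invariant explicit, namely that the configured entries in `base_bdevs_list` must agree with `num_base_bdevs_discovered`.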
00:12:11.033 65536+0 records in 00:12:11.033 65536+0 records out 00:12:11.033 33554432 bytes (34 MB, 32 MiB) copied, 4.39868 s, 7.6 MB/s 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:11.033 [2024-11-10 15:21:16.636378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.033 [2024-11-10 15:21:16.672491] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.033 15:21:16 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.033 "name": "raid_bdev1", 00:12:11.033 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:11.033 "strip_size_kb": 0, 00:12:11.033 "state": "online", 00:12:11.033 "raid_level": "raid1", 00:12:11.033 "superblock": false, 00:12:11.033 "num_base_bdevs": 2, 00:12:11.033 "num_base_bdevs_discovered": 1, 00:12:11.033 "num_base_bdevs_operational": 1, 00:12:11.033 "base_bdevs_list": [ 00:12:11.033 { 00:12:11.033 "name": null, 00:12:11.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.033 "is_configured": false, 00:12:11.033 "data_offset": 0, 00:12:11.033 "data_size": 65536 00:12:11.033 }, 00:12:11.033 { 00:12:11.033 "name": "BaseBdev2", 00:12:11.033 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:11.033 "is_configured": true, 00:12:11.033 "data_offset": 0, 00:12:11.033 "data_size": 65536 00:12:11.033 } 00:12:11.033 ] 00:12:11.033 }' 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.033 15:21:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.033 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:11.033 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.033 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.033 [2024-11-10 15:21:17.148672] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.033 [2024-11-10 15:21:17.172925] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09fe0 00:12:11.033 15:21:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.034 15:21:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:11.034 [2024-11-10 15:21:17.175613] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.971 "name": "raid_bdev1", 00:12:11.971 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:11.971 "strip_size_kb": 0, 00:12:11.971 "state": "online", 00:12:11.971 "raid_level": "raid1", 00:12:11.971 "superblock": false, 00:12:11.971 "num_base_bdevs": 2, 00:12:11.971 "num_base_bdevs_discovered": 2, 00:12:11.971 "num_base_bdevs_operational": 2, 00:12:11.971 "process": { 00:12:11.971 "type": "rebuild", 00:12:11.971 "target": "spare", 00:12:11.971 "progress": { 00:12:11.971 "blocks": 20480, 00:12:11.971 "percent": 31 00:12:11.971 } 00:12:11.971 }, 00:12:11.971 "base_bdevs_list": [ 00:12:11.971 { 00:12:11.971 "name": "spare", 00:12:11.971 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:11.971 "is_configured": true, 00:12:11.971 "data_offset": 0, 00:12:11.971 
"data_size": 65536 00:12:11.971 }, 00:12:11.971 { 00:12:11.971 "name": "BaseBdev2", 00:12:11.971 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:11.971 "is_configured": true, 00:12:11.971 "data_offset": 0, 00:12:11.971 "data_size": 65536 00:12:11.971 } 00:12:11.971 ] 00:12:11.971 }' 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.971 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.971 [2024-11-10 15:21:18.309865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.230 [2024-11-10 15:21:18.386664] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:12.231 [2024-11-10 15:21:18.386759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.231 [2024-11-10 15:21:18.386777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:12.231 [2024-11-10 15:21:18.386789] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.231 "name": "raid_bdev1", 00:12:12.231 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:12.231 "strip_size_kb": 0, 00:12:12.231 "state": "online", 00:12:12.231 "raid_level": "raid1", 00:12:12.231 "superblock": false, 00:12:12.231 "num_base_bdevs": 2, 00:12:12.231 "num_base_bdevs_discovered": 1, 00:12:12.231 "num_base_bdevs_operational": 1, 00:12:12.231 "base_bdevs_list": [ 00:12:12.231 { 00:12:12.231 "name": null, 00:12:12.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.231 
"is_configured": false, 00:12:12.231 "data_offset": 0, 00:12:12.231 "data_size": 65536 00:12:12.231 }, 00:12:12.231 { 00:12:12.231 "name": "BaseBdev2", 00:12:12.231 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:12.231 "is_configured": true, 00:12:12.231 "data_offset": 0, 00:12:12.231 "data_size": 65536 00:12:12.231 } 00:12:12.231 ] 00:12:12.231 }' 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.231 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.490 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:12.749 "name": "raid_bdev1", 00:12:12.749 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:12.749 "strip_size_kb": 0, 00:12:12.749 "state": "online", 00:12:12.749 "raid_level": "raid1", 00:12:12.749 "superblock": false, 00:12:12.749 "num_base_bdevs": 2, 00:12:12.749 
"num_base_bdevs_discovered": 1, 00:12:12.749 "num_base_bdevs_operational": 1, 00:12:12.749 "base_bdevs_list": [ 00:12:12.749 { 00:12:12.749 "name": null, 00:12:12.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.749 "is_configured": false, 00:12:12.749 "data_offset": 0, 00:12:12.749 "data_size": 65536 00:12:12.749 }, 00:12:12.749 { 00:12:12.749 "name": "BaseBdev2", 00:12:12.749 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:12.749 "is_configured": true, 00:12:12.749 "data_offset": 0, 00:12:12.749 "data_size": 65536 00:12:12.749 } 00:12:12.749 ] 00:12:12.749 }' 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.749 [2024-11-10 15:21:18.959747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.749 [2024-11-10 15:21:18.968568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a0b0 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.749 15:21:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:12.749 [2024-11-10 15:21:18.970646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.726 15:21:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.726 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.726 "name": "raid_bdev1", 00:12:13.726 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:13.726 "strip_size_kb": 0, 00:12:13.726 "state": "online", 00:12:13.726 "raid_level": "raid1", 00:12:13.726 "superblock": false, 00:12:13.726 "num_base_bdevs": 2, 00:12:13.726 "num_base_bdevs_discovered": 2, 00:12:13.726 "num_base_bdevs_operational": 2, 00:12:13.726 "process": { 00:12:13.726 "type": "rebuild", 00:12:13.726 "target": "spare", 00:12:13.726 "progress": { 00:12:13.726 "blocks": 20480, 00:12:13.726 "percent": 31 00:12:13.726 } 00:12:13.726 }, 00:12:13.726 "base_bdevs_list": [ 00:12:13.726 { 00:12:13.726 "name": "spare", 00:12:13.726 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:13.726 "is_configured": true, 00:12:13.726 "data_offset": 0, 00:12:13.726 "data_size": 65536 00:12:13.726 }, 00:12:13.726 { 00:12:13.726 "name": "BaseBdev2", 00:12:13.726 "uuid": 
"c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:13.726 "is_configured": true, 00:12:13.726 "data_offset": 0, 00:12:13.726 "data_size": 65536 00:12:13.726 } 00:12:13.726 ] 00:12:13.726 }' 00:12:13.726 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.726 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.727 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=294 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.986 "name": "raid_bdev1", 00:12:13.986 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:13.986 "strip_size_kb": 0, 00:12:13.986 "state": "online", 00:12:13.986 "raid_level": "raid1", 00:12:13.986 "superblock": false, 00:12:13.986 "num_base_bdevs": 2, 00:12:13.986 "num_base_bdevs_discovered": 2, 00:12:13.986 "num_base_bdevs_operational": 2, 00:12:13.986 "process": { 00:12:13.986 "type": "rebuild", 00:12:13.986 "target": "spare", 00:12:13.986 "progress": { 00:12:13.986 "blocks": 22528, 00:12:13.986 "percent": 34 00:12:13.986 } 00:12:13.986 }, 00:12:13.986 "base_bdevs_list": [ 00:12:13.986 { 00:12:13.986 "name": "spare", 00:12:13.986 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:13.986 "is_configured": true, 00:12:13.986 "data_offset": 0, 00:12:13.986 "data_size": 65536 00:12:13.986 }, 00:12:13.986 { 00:12:13.986 "name": "BaseBdev2", 00:12:13.986 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:13.986 "is_configured": true, 00:12:13.986 "data_offset": 0, 00:12:13.986 "data_size": 65536 00:12:13.986 } 00:12:13.986 ] 00:12:13.986 }' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.986 15:21:20 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.924 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.183 "name": "raid_bdev1", 00:12:15.183 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:15.183 "strip_size_kb": 0, 00:12:15.183 "state": "online", 00:12:15.183 "raid_level": "raid1", 00:12:15.183 "superblock": false, 00:12:15.183 "num_base_bdevs": 2, 00:12:15.183 "num_base_bdevs_discovered": 2, 00:12:15.183 "num_base_bdevs_operational": 2, 00:12:15.183 "process": { 00:12:15.183 "type": "rebuild", 00:12:15.183 "target": "spare", 00:12:15.183 "progress": { 00:12:15.183 "blocks": 45056, 00:12:15.183 "percent": 68 00:12:15.183 } 00:12:15.183 }, 00:12:15.183 "base_bdevs_list": [ 00:12:15.183 { 00:12:15.183 "name": "spare", 00:12:15.183 "uuid": 
"e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:15.183 "is_configured": true, 00:12:15.183 "data_offset": 0, 00:12:15.183 "data_size": 65536 00:12:15.183 }, 00:12:15.183 { 00:12:15.183 "name": "BaseBdev2", 00:12:15.183 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:15.183 "is_configured": true, 00:12:15.183 "data_offset": 0, 00:12:15.183 "data_size": 65536 00:12:15.183 } 00:12:15.183 ] 00:12:15.183 }' 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.183 15:21:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.121 [2024-11-10 15:21:22.198580] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:16.121 [2024-11-10 15:21:22.198769] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:16.121 [2024-11-10 15:21:22.198840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.121 15:21:22 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.121 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.121 "name": "raid_bdev1", 00:12:16.121 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:16.121 "strip_size_kb": 0, 00:12:16.121 "state": "online", 00:12:16.121 "raid_level": "raid1", 00:12:16.121 "superblock": false, 00:12:16.121 "num_base_bdevs": 2, 00:12:16.121 "num_base_bdevs_discovered": 2, 00:12:16.121 "num_base_bdevs_operational": 2, 00:12:16.121 "base_bdevs_list": [ 00:12:16.121 { 00:12:16.121 "name": "spare", 00:12:16.121 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:16.121 "is_configured": true, 00:12:16.121 "data_offset": 0, 00:12:16.121 "data_size": 65536 00:12:16.121 }, 00:12:16.121 { 00:12:16.121 "name": "BaseBdev2", 00:12:16.121 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:16.121 "is_configured": true, 00:12:16.121 "data_offset": 0, 00:12:16.121 "data_size": 65536 00:12:16.121 } 00:12:16.122 ] 00:12:16.122 }' 00:12:16.122 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.381 "name": "raid_bdev1", 00:12:16.381 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:16.381 "strip_size_kb": 0, 00:12:16.381 "state": "online", 00:12:16.381 "raid_level": "raid1", 00:12:16.381 "superblock": false, 00:12:16.381 "num_base_bdevs": 2, 00:12:16.381 "num_base_bdevs_discovered": 2, 00:12:16.381 "num_base_bdevs_operational": 2, 00:12:16.381 "base_bdevs_list": [ 00:12:16.381 { 00:12:16.381 "name": "spare", 00:12:16.381 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:16.381 "is_configured": true, 00:12:16.381 "data_offset": 0, 00:12:16.381 "data_size": 65536 00:12:16.381 }, 00:12:16.381 { 00:12:16.381 "name": "BaseBdev2", 00:12:16.381 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:16.381 "is_configured": true, 00:12:16.381 "data_offset": 0, 00:12:16.381 "data_size": 65536 
00:12:16.381 } 00:12:16.381 ] 00:12:16.381 }' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.381 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.381 
15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.382 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.382 "name": "raid_bdev1", 00:12:16.382 "uuid": "78aa7798-366b-45ba-9287-d992d1b7337b", 00:12:16.382 "strip_size_kb": 0, 00:12:16.382 "state": "online", 00:12:16.382 "raid_level": "raid1", 00:12:16.382 "superblock": false, 00:12:16.382 "num_base_bdevs": 2, 00:12:16.382 "num_base_bdevs_discovered": 2, 00:12:16.382 "num_base_bdevs_operational": 2, 00:12:16.382 "base_bdevs_list": [ 00:12:16.382 { 00:12:16.382 "name": "spare", 00:12:16.382 "uuid": "e59c68fc-080c-5638-975b-dded3b5955b1", 00:12:16.382 "is_configured": true, 00:12:16.382 "data_offset": 0, 00:12:16.382 "data_size": 65536 00:12:16.382 }, 00:12:16.382 { 00:12:16.382 "name": "BaseBdev2", 00:12:16.382 "uuid": "c6a14b25-bb78-58ee-ba43-88f7e0ef62b4", 00:12:16.382 "is_configured": true, 00:12:16.382 "data_offset": 0, 00:12:16.382 "data_size": 65536 00:12:16.382 } 00:12:16.382 ] 00:12:16.382 }' 00:12:16.382 15:21:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.382 15:21:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.950 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.950 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.950 [2024-11-10 15:21:23.119484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.950 [2024-11-10 15:21:23.119628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.950 [2024-11-10 15:21:23.119750] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.950 [2024-11-10 15:21:23.119849] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.951 [2024-11-10 15:21:23.119905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:16.951 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:17.209 /dev/nbd0 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.209 1+0 records in 00:12:17.209 1+0 records out 00:12:17.209 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336677 s, 12.2 MB/s 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.209 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.210 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:17.468 /dev/nbd1 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.468 1+0 records in 00:12:17.468 1+0 records out 00:12:17.468 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525309 s, 7.8 MB/s 00:12:17.468 15:21:23 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.468 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:17.727 
15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:17.727 15:21:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87408 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 87408 ']' 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 87408 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 
-- # uname 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87408 00:12:17.987 killing process with pid 87408 00:12:17.987 Received shutdown signal, test time was about 60.000000 seconds 00:12:17.987 00:12:17.987 Latency(us) 00:12:17.987 [2024-11-10T15:21:24.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.987 [2024-11-10T15:21:24.350Z] =================================================================================================================== 00:12:17.987 [2024-11-10T15:21:24.350Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:17.987 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87408' 00:12:17.988 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 87408 00:12:17.988 [2024-11-10 15:21:24.218089] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.988 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 87408 00:12:17.988 [2024-11-10 15:21:24.249674] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:18.247 00:12:18.247 real 0m14.220s 00:12:18.247 user 0m15.742s 00:12:18.247 sys 0m3.160s 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.247 ************************************ 00:12:18.247 END TEST raid_rebuild_test 00:12:18.247 ************************************ 00:12:18.247 15:21:24 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.247 15:21:24 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:18.247 15:21:24 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:18.247 15:21:24 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.247 15:21:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.247 ************************************ 00:12:18.247 START TEST raid_rebuild_test_sb 00:12:18.247 ************************************ 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.247 15:21:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=87820 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 87820 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 87820 ']' 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:18.247 
15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:18.247 15:21:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.505 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:18.506 Zero copy mechanism will not be used. 00:12:18.506 [2024-11-10 15:21:24.626490] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:12:18.506 [2024-11-10 15:21:24.626613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87820 ] 00:12:18.506 [2024-11-10 15:21:24.757994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:18.506 [2024-11-10 15:21:24.797119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.506 [2024-11-10 15:21:24.821671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.506 [2024-11-10 15:21:24.864055] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.506 [2024-11-10 15:21:24.864102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 BaseBdev1_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 [2024-11-10 15:21:25.482525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.444 [2024-11-10 15:21:25.482592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.444 [2024-11-10 15:21:25.482624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.444 [2024-11-10 
15:21:25.482639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.444 [2024-11-10 15:21:25.484755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.444 [2024-11-10 15:21:25.484794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.444 BaseBdev1 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 BaseBdev2_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 [2024-11-10 15:21:25.510932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:19.444 [2024-11-10 15:21:25.510984] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.444 [2024-11-10 15:21:25.511001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.444 [2024-11-10 15:21:25.511026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.444 [2024-11-10 15:21:25.513056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:19.444 [2024-11-10 15:21:25.513144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.444 BaseBdev2 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 spare_malloc 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.444 spare_delay 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.444 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.445 [2024-11-10 15:21:25.551248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.445 [2024-11-10 15:21:25.551308] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.445 [2024-11-10 15:21:25.551336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:19.445 [2024-11-10 15:21:25.551350] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.445 [2024-11-10 15:21:25.553409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.445 [2024-11-10 15:21:25.553447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.445 spare 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.445 [2024-11-10 15:21:25.563312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.445 [2024-11-10 15:21:25.565206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.445 [2024-11-10 15:21:25.565343] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:19.445 [2024-11-10 15:21:25.565358] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.445 [2024-11-10 15:21:25.565601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:19.445 [2024-11-10 15:21:25.565728] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:19.445 [2024-11-10 15:21:25.565737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:19.445 [2024-11-10 15:21:25.565843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.445 "name": "raid_bdev1", 00:12:19.445 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:19.445 "strip_size_kb": 0, 00:12:19.445 "state": "online", 00:12:19.445 "raid_level": "raid1", 00:12:19.445 "superblock": true, 00:12:19.445 "num_base_bdevs": 2, 00:12:19.445 
"num_base_bdevs_discovered": 2, 00:12:19.445 "num_base_bdevs_operational": 2, 00:12:19.445 "base_bdevs_list": [ 00:12:19.445 { 00:12:19.445 "name": "BaseBdev1", 00:12:19.445 "uuid": "3e1aedb6-affb-5322-a70f-f9f5dd04f8fb", 00:12:19.445 "is_configured": true, 00:12:19.445 "data_offset": 2048, 00:12:19.445 "data_size": 63488 00:12:19.445 }, 00:12:19.445 { 00:12:19.445 "name": "BaseBdev2", 00:12:19.445 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:19.445 "is_configured": true, 00:12:19.445 "data_offset": 2048, 00:12:19.445 "data_size": 63488 00:12:19.445 } 00:12:19.445 ] 00:12:19.445 }' 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.445 15:21:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.704 [2024-11-10 15:21:26.035843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.704 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:19.962 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:19.962 [2024-11-10 15:21:26.303609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:19.962 /dev/nbd0 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:20.221 1+0 records in 00:12:20.221 1+0 records out 00:12:20.221 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335159 s, 12.2 MB/s 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:20.221 15:21:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:20.221 15:21:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:24.417 63488+0 records in 00:12:24.417 63488+0 records out 00:12:24.417 32505856 bytes (33 MB, 31 MiB) copied, 4.09535 s, 7.9 MB/s 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:24.417 [2024-11-10 15:21:30.674187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.417 [2024-11-10 15:21:30.710306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.417 15:21:30 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.417 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.680 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.680 "name": "raid_bdev1", 00:12:24.680 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:24.680 "strip_size_kb": 0, 00:12:24.680 "state": "online", 00:12:24.680 "raid_level": "raid1", 00:12:24.680 "superblock": true, 00:12:24.680 "num_base_bdevs": 2, 00:12:24.680 "num_base_bdevs_discovered": 1, 00:12:24.680 "num_base_bdevs_operational": 1, 00:12:24.680 "base_bdevs_list": [ 00:12:24.680 { 00:12:24.680 "name": null, 00:12:24.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.680 "is_configured": false, 00:12:24.680 "data_offset": 0, 00:12:24.680 "data_size": 63488 00:12:24.680 }, 00:12:24.680 { 00:12:24.680 "name": "BaseBdev2", 00:12:24.680 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:24.680 "is_configured": true, 00:12:24.680 "data_offset": 2048, 00:12:24.680 "data_size": 63488 00:12:24.680 } 00:12:24.680 ] 00:12:24.680 }' 00:12:24.680 15:21:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.680 15:21:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.940 15:21:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:24.940 15:21:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.940 15:21:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.940 [2024-11-10 15:21:31.186435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:24.940 [2024-11-10 15:21:31.200921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3770 00:12:24.940 15:21:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.940 15:21:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:24.940 [2024-11-10 15:21:31.203402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.879 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.138 "name": "raid_bdev1", 00:12:26.138 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:26.138 "strip_size_kb": 0, 00:12:26.138 "state": "online", 00:12:26.138 "raid_level": "raid1", 00:12:26.138 "superblock": true, 00:12:26.138 "num_base_bdevs": 2, 00:12:26.138 
"num_base_bdevs_discovered": 2, 00:12:26.138 "num_base_bdevs_operational": 2, 00:12:26.138 "process": { 00:12:26.138 "type": "rebuild", 00:12:26.138 "target": "spare", 00:12:26.138 "progress": { 00:12:26.138 "blocks": 20480, 00:12:26.138 "percent": 32 00:12:26.138 } 00:12:26.138 }, 00:12:26.138 "base_bdevs_list": [ 00:12:26.138 { 00:12:26.138 "name": "spare", 00:12:26.138 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:26.138 "is_configured": true, 00:12:26.138 "data_offset": 2048, 00:12:26.138 "data_size": 63488 00:12:26.138 }, 00:12:26.138 { 00:12:26.138 "name": "BaseBdev2", 00:12:26.138 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:26.138 "is_configured": true, 00:12:26.138 "data_offset": 2048, 00:12:26.138 "data_size": 63488 00:12:26.138 } 00:12:26.138 ] 00:12:26.138 }' 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.138 [2024-11-10 15:21:32.361186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.138 [2024-11-10 15:21:32.410943] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:26.138 [2024-11-10 15:21:32.411147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.138 [2024-11-10 15:21:32.411166] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:26.138 [2024-11-10 15:21:32.411189] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.138 15:21:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.138 "name": "raid_bdev1", 00:12:26.138 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:26.138 "strip_size_kb": 0, 00:12:26.138 "state": "online", 00:12:26.138 "raid_level": "raid1", 00:12:26.138 "superblock": true, 00:12:26.138 "num_base_bdevs": 2, 00:12:26.138 "num_base_bdevs_discovered": 1, 00:12:26.138 "num_base_bdevs_operational": 1, 00:12:26.138 "base_bdevs_list": [ 00:12:26.138 { 00:12:26.138 "name": null, 00:12:26.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.138 "is_configured": false, 00:12:26.138 "data_offset": 0, 00:12:26.138 "data_size": 63488 00:12:26.138 }, 00:12:26.138 { 00:12:26.138 "name": "BaseBdev2", 00:12:26.138 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:26.138 "is_configured": true, 00:12:26.138 "data_offset": 2048, 00:12:26.138 "data_size": 63488 00:12:26.138 } 00:12:26.138 ] 00:12:26.138 }' 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.138 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.707 "name": "raid_bdev1", 00:12:26.707 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:26.707 "strip_size_kb": 0, 00:12:26.707 "state": "online", 00:12:26.707 "raid_level": "raid1", 00:12:26.707 "superblock": true, 00:12:26.707 "num_base_bdevs": 2, 00:12:26.707 "num_base_bdevs_discovered": 1, 00:12:26.707 "num_base_bdevs_operational": 1, 00:12:26.707 "base_bdevs_list": [ 00:12:26.707 { 00:12:26.707 "name": null, 00:12:26.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.707 "is_configured": false, 00:12:26.707 "data_offset": 0, 00:12:26.707 "data_size": 63488 00:12:26.707 }, 00:12:26.707 { 00:12:26.707 "name": "BaseBdev2", 00:12:26.707 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:26.707 "is_configured": true, 00:12:26.707 "data_offset": 2048, 00:12:26.707 "data_size": 63488 00:12:26.707 } 00:12:26.707 ] 00:12:26.707 }' 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:26.707 15:21:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:26.707 [2024-11-10 15:21:33.048571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:26.707 [2024-11-10 15:21:33.053491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3840 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.707 15:21:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:26.707 [2024-11-10 15:21:33.055305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.088 "name": "raid_bdev1", 00:12:28.088 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:28.088 "strip_size_kb": 0, 00:12:28.088 "state": "online", 00:12:28.088 "raid_level": "raid1", 
00:12:28.088 "superblock": true, 00:12:28.088 "num_base_bdevs": 2, 00:12:28.088 "num_base_bdevs_discovered": 2, 00:12:28.088 "num_base_bdevs_operational": 2, 00:12:28.088 "process": { 00:12:28.088 "type": "rebuild", 00:12:28.088 "target": "spare", 00:12:28.088 "progress": { 00:12:28.088 "blocks": 20480, 00:12:28.088 "percent": 32 00:12:28.088 } 00:12:28.088 }, 00:12:28.088 "base_bdevs_list": [ 00:12:28.088 { 00:12:28.088 "name": "spare", 00:12:28.088 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 }, 00:12:28.088 { 00:12:28.088 "name": "BaseBdev2", 00:12:28.088 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 } 00:12:28.088 ] 00:12:28.088 }' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:28.088 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:28.088 15:21:34 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=308 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.088 "name": "raid_bdev1", 00:12:28.088 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:28.088 "strip_size_kb": 0, 00:12:28.088 "state": "online", 00:12:28.088 "raid_level": "raid1", 00:12:28.088 "superblock": true, 00:12:28.088 "num_base_bdevs": 2, 00:12:28.088 "num_base_bdevs_discovered": 2, 00:12:28.088 "num_base_bdevs_operational": 2, 00:12:28.088 "process": { 00:12:28.088 "type": "rebuild", 00:12:28.088 "target": "spare", 00:12:28.088 "progress": { 00:12:28.088 "blocks": 22528, 00:12:28.088 "percent": 35 00:12:28.088 } 00:12:28.088 }, 00:12:28.088 "base_bdevs_list": [ 
00:12:28.088 { 00:12:28.088 "name": "spare", 00:12:28.088 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 }, 00:12:28.088 { 00:12:28.088 "name": "BaseBdev2", 00:12:28.088 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:28.088 "is_configured": true, 00:12:28.088 "data_offset": 2048, 00:12:28.088 "data_size": 63488 00:12:28.088 } 00:12:28.088 ] 00:12:28.088 }' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:28.088 15:21:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:29.026 "name": "raid_bdev1", 00:12:29.026 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:29.026 "strip_size_kb": 0, 00:12:29.026 "state": "online", 00:12:29.026 "raid_level": "raid1", 00:12:29.026 "superblock": true, 00:12:29.026 "num_base_bdevs": 2, 00:12:29.026 "num_base_bdevs_discovered": 2, 00:12:29.026 "num_base_bdevs_operational": 2, 00:12:29.026 "process": { 00:12:29.026 "type": "rebuild", 00:12:29.026 "target": "spare", 00:12:29.026 "progress": { 00:12:29.026 "blocks": 45056, 00:12:29.026 "percent": 70 00:12:29.026 } 00:12:29.026 }, 00:12:29.026 "base_bdevs_list": [ 00:12:29.026 { 00:12:29.026 "name": "spare", 00:12:29.026 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:29.026 "is_configured": true, 00:12:29.026 "data_offset": 2048, 00:12:29.026 "data_size": 63488 00:12:29.026 }, 00:12:29.026 { 00:12:29.026 "name": "BaseBdev2", 00:12:29.026 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:29.026 "is_configured": true, 00:12:29.026 "data_offset": 2048, 00:12:29.026 "data_size": 63488 00:12:29.026 } 00:12:29.026 ] 00:12:29.026 }' 00:12:29.026 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:29.286 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:29.286 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:29.286 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:29.286 15:21:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:29.854 [2024-11-10 
15:21:36.173103] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:29.854 [2024-11-10 15:21:36.173327] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:29.854 [2024-11-10 15:21:36.173489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.424 "name": "raid_bdev1", 00:12:30.424 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:30.424 "strip_size_kb": 0, 00:12:30.424 "state": "online", 00:12:30.424 "raid_level": "raid1", 00:12:30.424 "superblock": true, 00:12:30.424 "num_base_bdevs": 2, 00:12:30.424 "num_base_bdevs_discovered": 2, 00:12:30.424 
"num_base_bdevs_operational": 2, 00:12:30.424 "base_bdevs_list": [ 00:12:30.424 { 00:12:30.424 "name": "spare", 00:12:30.424 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:30.424 "is_configured": true, 00:12:30.424 "data_offset": 2048, 00:12:30.424 "data_size": 63488 00:12:30.424 }, 00:12:30.424 { 00:12:30.424 "name": "BaseBdev2", 00:12:30.424 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:30.424 "is_configured": true, 00:12:30.424 "data_offset": 2048, 00:12:30.424 "data_size": 63488 00:12:30.424 } 00:12:30.424 ] 00:12:30.424 }' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:30.424 "name": "raid_bdev1", 00:12:30.424 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:30.424 "strip_size_kb": 0, 00:12:30.424 "state": "online", 00:12:30.424 "raid_level": "raid1", 00:12:30.424 "superblock": true, 00:12:30.424 "num_base_bdevs": 2, 00:12:30.424 "num_base_bdevs_discovered": 2, 00:12:30.424 "num_base_bdevs_operational": 2, 00:12:30.424 "base_bdevs_list": [ 00:12:30.424 { 00:12:30.424 "name": "spare", 00:12:30.424 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:30.424 "is_configured": true, 00:12:30.424 "data_offset": 2048, 00:12:30.424 "data_size": 63488 00:12:30.424 }, 00:12:30.424 { 00:12:30.424 "name": "BaseBdev2", 00:12:30.424 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:30.424 "is_configured": true, 00:12:30.424 "data_offset": 2048, 00:12:30.424 "data_size": 63488 00:12:30.424 } 00:12:30.424 ] 00:12:30.424 }' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.424 15:21:36 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.424 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.684 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.684 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.684 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.684 "name": "raid_bdev1", 00:12:30.684 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:30.684 "strip_size_kb": 0, 00:12:30.684 "state": "online", 00:12:30.684 "raid_level": "raid1", 00:12:30.684 "superblock": true, 00:12:30.684 "num_base_bdevs": 2, 00:12:30.684 "num_base_bdevs_discovered": 2, 00:12:30.684 "num_base_bdevs_operational": 2, 00:12:30.684 "base_bdevs_list": [ 00:12:30.684 { 00:12:30.684 "name": "spare", 00:12:30.684 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:30.684 "is_configured": true, 00:12:30.684 "data_offset": 2048, 00:12:30.684 "data_size": 63488 00:12:30.684 }, 00:12:30.684 { 
00:12:30.684 "name": "BaseBdev2", 00:12:30.684 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:30.684 "is_configured": true, 00:12:30.684 "data_offset": 2048, 00:12:30.684 "data_size": 63488 00:12:30.684 } 00:12:30.684 ] 00:12:30.684 }' 00:12:30.684 15:21:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.684 15:21:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.944 [2024-11-10 15:21:37.250669] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.944 [2024-11-10 15:21:37.250752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.944 [2024-11-10 15:21:37.250875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.944 [2024-11-10 15:21:37.250964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.944 [2024-11-10 15:21:37.251021] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:30.944 15:21:37 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:30.944 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:31.204 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:31.205 /dev/nbd0 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 
00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:31.205 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.465 1+0 records in 00:12:31.465 1+0 records out 00:12:31.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217715 s, 18.8 MB/s 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:31.465 /dev/nbd1 00:12:31.465 15:21:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:31.465 1+0 records in 00:12:31.465 1+0 records out 00:12:31.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345358 s, 11.9 MB/s 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:12:31.465 15:21:37 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:31.465 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.725 15:21:37 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:31.725 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:31.985 [2024-11-10 15:21:38.292355] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:31.985 [2024-11-10 15:21:38.292419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.985 [2024-11-10 15:21:38.292446] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:31.985 [2024-11-10 15:21:38.292455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.985 [2024-11-10 15:21:38.294605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.985 [2024-11-10 15:21:38.294642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:31.985 [2024-11-10 15:21:38.294727] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:31.985 [2024-11-10 15:21:38.294764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.985 [2024-11-10 15:21:38.294873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:31.985 spare 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.985 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.245 [2024-11-10 15:21:38.394941] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.245 [2024-11-10 15:21:38.394982] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.245 [2024-11-10 15:21:38.395241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ef0 00:12:32.245 [2024-11-10 15:21:38.395416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.245 [2024-11-10 15:21:38.395434] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.245 [2024-11-10 15:21:38.395563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.245 
15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.245 "name": "raid_bdev1", 00:12:32.245 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:32.245 "strip_size_kb": 0, 00:12:32.245 "state": "online", 00:12:32.245 "raid_level": "raid1", 00:12:32.245 "superblock": true, 00:12:32.245 "num_base_bdevs": 2, 00:12:32.245 "num_base_bdevs_discovered": 2, 00:12:32.245 "num_base_bdevs_operational": 2, 00:12:32.245 "base_bdevs_list": [ 00:12:32.245 { 00:12:32.245 "name": "spare", 00:12:32.245 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:32.245 "is_configured": true, 00:12:32.245 "data_offset": 2048, 00:12:32.245 "data_size": 63488 00:12:32.245 }, 00:12:32.245 { 00:12:32.245 "name": "BaseBdev2", 00:12:32.245 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:32.245 "is_configured": true, 00:12:32.245 "data_offset": 2048, 00:12:32.245 "data_size": 63488 00:12:32.245 } 00:12:32.245 ] 00:12:32.245 }' 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.245 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.512 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.784 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.784 "name": "raid_bdev1", 00:12:32.784 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:32.784 "strip_size_kb": 0, 00:12:32.784 "state": "online", 00:12:32.784 "raid_level": "raid1", 00:12:32.784 "superblock": true, 00:12:32.784 "num_base_bdevs": 2, 00:12:32.784 "num_base_bdevs_discovered": 2, 00:12:32.784 "num_base_bdevs_operational": 2, 00:12:32.784 "base_bdevs_list": [ 00:12:32.785 { 00:12:32.785 "name": "spare", 00:12:32.785 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:32.785 "is_configured": true, 00:12:32.785 "data_offset": 2048, 00:12:32.785 "data_size": 63488 00:12:32.785 }, 00:12:32.785 { 00:12:32.785 "name": "BaseBdev2", 00:12:32.785 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:32.785 "is_configured": true, 00:12:32.785 "data_offset": 2048, 00:12:32.785 "data_size": 63488 00:12:32.785 } 00:12:32.785 ] 00:12:32.785 }' 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.785 15:21:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 [2024-11-10 15:21:39.008639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.785 "name": "raid_bdev1", 00:12:32.785 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:32.785 "strip_size_kb": 0, 00:12:32.785 "state": "online", 00:12:32.785 "raid_level": "raid1", 00:12:32.785 "superblock": true, 00:12:32.785 "num_base_bdevs": 2, 00:12:32.785 "num_base_bdevs_discovered": 1, 00:12:32.785 "num_base_bdevs_operational": 1, 00:12:32.785 "base_bdevs_list": [ 00:12:32.785 { 00:12:32.785 "name": null, 00:12:32.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.785 "is_configured": false, 00:12:32.785 "data_offset": 0, 00:12:32.785 "data_size": 63488 00:12:32.785 }, 00:12:32.785 { 00:12:32.785 "name": "BaseBdev2", 00:12:32.785 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:32.785 "is_configured": true, 00:12:32.785 "data_offset": 2048, 00:12:32.785 "data_size": 63488 00:12:32.785 } 00:12:32.785 ] 00:12:32.785 }' 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.785 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.355 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.355 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.355 15:21:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:33.355 [2024-11-10 15:21:39.448786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.355 [2024-11-10 15:21:39.448970] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:33.355 [2024-11-10 15:21:39.448988] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:33.355 [2024-11-10 15:21:39.449046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.355 [2024-11-10 15:21:39.453845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1fc0 00:12:33.355 15:21:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.355 15:21:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:33.355 [2024-11-10 15:21:39.455885] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.292 "name": "raid_bdev1", 00:12:34.292 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:34.292 "strip_size_kb": 0, 00:12:34.292 "state": "online", 00:12:34.292 "raid_level": "raid1", 00:12:34.292 "superblock": true, 00:12:34.292 "num_base_bdevs": 2, 00:12:34.292 "num_base_bdevs_discovered": 2, 00:12:34.292 "num_base_bdevs_operational": 2, 00:12:34.292 "process": { 00:12:34.292 "type": "rebuild", 00:12:34.292 "target": "spare", 00:12:34.292 "progress": { 00:12:34.292 "blocks": 20480, 00:12:34.292 "percent": 32 00:12:34.292 } 00:12:34.292 }, 00:12:34.292 "base_bdevs_list": [ 00:12:34.292 { 00:12:34.292 "name": "spare", 00:12:34.292 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:34.292 "is_configured": true, 00:12:34.292 "data_offset": 2048, 00:12:34.292 "data_size": 63488 00:12:34.292 }, 00:12:34.292 { 00:12:34.292 "name": "BaseBdev2", 00:12:34.292 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:34.292 "is_configured": true, 00:12:34.292 "data_offset": 2048, 00:12:34.292 "data_size": 63488 00:12:34.292 } 00:12:34.292 ] 00:12:34.292 }' 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.292 15:21:40 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.292 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.292 [2024-11-10 15:21:40.602471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.552 [2024-11-10 15:21:40.662917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.552 [2024-11-10 15:21:40.663035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.552 [2024-11-10 15:21:40.663051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.552 [2024-11-10 15:21:40.663061] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.552 "name": "raid_bdev1", 00:12:34.552 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:34.552 "strip_size_kb": 0, 00:12:34.552 "state": "online", 00:12:34.552 "raid_level": "raid1", 00:12:34.552 "superblock": true, 00:12:34.552 "num_base_bdevs": 2, 00:12:34.552 "num_base_bdevs_discovered": 1, 00:12:34.552 "num_base_bdevs_operational": 1, 00:12:34.552 "base_bdevs_list": [ 00:12:34.552 { 00:12:34.552 "name": null, 00:12:34.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.552 "is_configured": false, 00:12:34.552 "data_offset": 0, 00:12:34.552 "data_size": 63488 00:12:34.552 }, 00:12:34.552 { 00:12:34.552 "name": "BaseBdev2", 00:12:34.552 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:34.552 "is_configured": true, 00:12:34.552 "data_offset": 2048, 00:12:34.552 "data_size": 63488 00:12:34.552 } 00:12:34.552 ] 00:12:34.552 }' 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.552 15:21:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.811 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:34.811 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:34.811 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.811 [2024-11-10 15:21:41.092007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:34.811 [2024-11-10 15:21:41.092099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:34.811 [2024-11-10 15:21:41.092122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:34.812 [2024-11-10 15:21:41.092133] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:34.812 [2024-11-10 15:21:41.092641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:34.812 [2024-11-10 15:21:41.092672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:34.812 [2024-11-10 15:21:41.092766] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:34.812 [2024-11-10 15:21:41.092789] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:34.812 [2024-11-10 15:21:41.092798] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:34.812 [2024-11-10 15:21:41.092820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:34.812 [2024-11-10 15:21:41.097740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:12:34.812 spare 00:12:34.812 15:21:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.812 15:21:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:34.812 [2024-11-10 15:21:41.099725] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.750 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.011 "name": "raid_bdev1", 00:12:36.011 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:36.011 "strip_size_kb": 0, 00:12:36.011 "state": "online", 00:12:36.011 
"raid_level": "raid1", 00:12:36.011 "superblock": true, 00:12:36.011 "num_base_bdevs": 2, 00:12:36.011 "num_base_bdevs_discovered": 2, 00:12:36.011 "num_base_bdevs_operational": 2, 00:12:36.011 "process": { 00:12:36.011 "type": "rebuild", 00:12:36.011 "target": "spare", 00:12:36.011 "progress": { 00:12:36.011 "blocks": 20480, 00:12:36.011 "percent": 32 00:12:36.011 } 00:12:36.011 }, 00:12:36.011 "base_bdevs_list": [ 00:12:36.011 { 00:12:36.011 "name": "spare", 00:12:36.011 "uuid": "7335adb9-dee3-501c-bc7f-3ce0934801c6", 00:12:36.011 "is_configured": true, 00:12:36.011 "data_offset": 2048, 00:12:36.011 "data_size": 63488 00:12:36.011 }, 00:12:36.011 { 00:12:36.011 "name": "BaseBdev2", 00:12:36.011 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:36.011 "is_configured": true, 00:12:36.011 "data_offset": 2048, 00:12:36.011 "data_size": 63488 00:12:36.011 } 00:12:36.011 ] 00:12:36.011 }' 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.011 [2024-11-10 15:21:42.230322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.011 [2024-11-10 15:21:42.306780] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:36.011 [2024-11-10 15:21:42.306875] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:36.011 [2024-11-10 15:21:42.306910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:36.011 [2024-11-10 15:21:42.306917] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.011 15:21:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.011 "name": "raid_bdev1", 00:12:36.011 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:36.011 "strip_size_kb": 0, 00:12:36.011 "state": "online", 00:12:36.011 "raid_level": "raid1", 00:12:36.011 "superblock": true, 00:12:36.011 "num_base_bdevs": 2, 00:12:36.011 "num_base_bdevs_discovered": 1, 00:12:36.011 "num_base_bdevs_operational": 1, 00:12:36.011 "base_bdevs_list": [ 00:12:36.011 { 00:12:36.011 "name": null, 00:12:36.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.011 "is_configured": false, 00:12:36.011 "data_offset": 0, 00:12:36.011 "data_size": 63488 00:12:36.011 }, 00:12:36.011 { 00:12:36.011 "name": "BaseBdev2", 00:12:36.011 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:36.011 "is_configured": true, 00:12:36.011 "data_offset": 2048, 00:12:36.011 "data_size": 63488 00:12:36.011 } 00:12:36.011 ] 00:12:36.011 }' 00:12:36.011 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.272 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.532 "name": "raid_bdev1", 00:12:36.532 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:36.532 "strip_size_kb": 0, 00:12:36.532 "state": "online", 00:12:36.532 "raid_level": "raid1", 00:12:36.532 "superblock": true, 00:12:36.532 "num_base_bdevs": 2, 00:12:36.532 "num_base_bdevs_discovered": 1, 00:12:36.532 "num_base_bdevs_operational": 1, 00:12:36.532 "base_bdevs_list": [ 00:12:36.532 { 00:12:36.532 "name": null, 00:12:36.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.532 "is_configured": false, 00:12:36.532 "data_offset": 0, 00:12:36.532 "data_size": 63488 00:12:36.532 }, 00:12:36.532 { 00:12:36.532 "name": "BaseBdev2", 00:12:36.532 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:36.532 "is_configured": true, 00:12:36.532 "data_offset": 2048, 00:12:36.532 "data_size": 63488 00:12:36.532 } 00:12:36.532 ] 00:12:36.532 }' 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:36.532 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.793 [2024-11-10 15:21:42.916056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:36.793 [2024-11-10 15:21:42.916126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:36.793 [2024-11-10 15:21:42.916150] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:36.793 [2024-11-10 15:21:42.916160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:36.793 [2024-11-10 15:21:42.916579] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:36.793 [2024-11-10 15:21:42.916607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:36.793 [2024-11-10 15:21:42.916701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:36.793 [2024-11-10 15:21:42.916728] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:36.793 [2024-11-10 15:21:42.916745] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:36.793 [2024-11-10 15:21:42.916759] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:36.793 BaseBdev1 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.793 15:21:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.733 "name": "raid_bdev1", 00:12:37.733 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:37.733 
"strip_size_kb": 0, 00:12:37.733 "state": "online", 00:12:37.733 "raid_level": "raid1", 00:12:37.733 "superblock": true, 00:12:37.733 "num_base_bdevs": 2, 00:12:37.733 "num_base_bdevs_discovered": 1, 00:12:37.733 "num_base_bdevs_operational": 1, 00:12:37.733 "base_bdevs_list": [ 00:12:37.733 { 00:12:37.733 "name": null, 00:12:37.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.733 "is_configured": false, 00:12:37.733 "data_offset": 0, 00:12:37.733 "data_size": 63488 00:12:37.733 }, 00:12:37.733 { 00:12:37.733 "name": "BaseBdev2", 00:12:37.733 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:37.733 "is_configured": true, 00:12:37.733 "data_offset": 2048, 00:12:37.733 "data_size": 63488 00:12:37.733 } 00:12:37.733 ] 00:12:37.733 }' 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.733 15:21:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.303 15:21:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.303 "name": "raid_bdev1", 00:12:38.303 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:38.303 "strip_size_kb": 0, 00:12:38.303 "state": "online", 00:12:38.303 "raid_level": "raid1", 00:12:38.303 "superblock": true, 00:12:38.303 "num_base_bdevs": 2, 00:12:38.303 "num_base_bdevs_discovered": 1, 00:12:38.303 "num_base_bdevs_operational": 1, 00:12:38.303 "base_bdevs_list": [ 00:12:38.303 { 00:12:38.303 "name": null, 00:12:38.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.303 "is_configured": false, 00:12:38.303 "data_offset": 0, 00:12:38.303 "data_size": 63488 00:12:38.303 }, 00:12:38.303 { 00:12:38.303 "name": "BaseBdev2", 00:12:38.303 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:38.303 "is_configured": true, 00:12:38.303 "data_offset": 2048, 00:12:38.303 "data_size": 63488 00:12:38.303 } 00:12:38.303 ] 00:12:38.303 }' 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local 
arg=rpc_cmd 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.303 [2024-11-10 15:21:44.504514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.303 [2024-11-10 15:21:44.504684] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:38.303 [2024-11-10 15:21:44.504699] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:38.303 request: 00:12:38.303 { 00:12:38.303 "base_bdev": "BaseBdev1", 00:12:38.303 "raid_bdev": "raid_bdev1", 00:12:38.303 "method": "bdev_raid_add_base_bdev", 00:12:38.303 "req_id": 1 00:12:38.303 } 00:12:38.303 Got JSON-RPC error response 00:12:38.303 response: 00:12:38.303 { 00:12:38.303 "code": -22, 00:12:38.303 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:38.303 } 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:12:38.303 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:38.304 15:21:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:38.304 15:21:44 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:38.304 15:21:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.244 "name": "raid_bdev1", 00:12:39.244 "uuid": 
"b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:39.244 "strip_size_kb": 0, 00:12:39.244 "state": "online", 00:12:39.244 "raid_level": "raid1", 00:12:39.244 "superblock": true, 00:12:39.244 "num_base_bdevs": 2, 00:12:39.244 "num_base_bdevs_discovered": 1, 00:12:39.244 "num_base_bdevs_operational": 1, 00:12:39.244 "base_bdevs_list": [ 00:12:39.244 { 00:12:39.244 "name": null, 00:12:39.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.244 "is_configured": false, 00:12:39.244 "data_offset": 0, 00:12:39.244 "data_size": 63488 00:12:39.244 }, 00:12:39.244 { 00:12:39.244 "name": "BaseBdev2", 00:12:39.244 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:39.244 "is_configured": true, 00:12:39.244 "data_offset": 2048, 00:12:39.244 "data_size": 63488 00:12:39.244 } 00:12:39.244 ] 00:12:39.244 }' 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.244 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.814 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:39.814 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.814 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:39.815 15:21:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.815 "name": "raid_bdev1", 00:12:39.815 "uuid": "b1053e65-d57f-4a43-a60b-e645e23cb6ce", 00:12:39.815 "strip_size_kb": 0, 00:12:39.815 "state": "online", 00:12:39.815 "raid_level": "raid1", 00:12:39.815 "superblock": true, 00:12:39.815 "num_base_bdevs": 2, 00:12:39.815 "num_base_bdevs_discovered": 1, 00:12:39.815 "num_base_bdevs_operational": 1, 00:12:39.815 "base_bdevs_list": [ 00:12:39.815 { 00:12:39.815 "name": null, 00:12:39.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.815 "is_configured": false, 00:12:39.815 "data_offset": 0, 00:12:39.815 "data_size": 63488 00:12:39.815 }, 00:12:39.815 { 00:12:39.815 "name": "BaseBdev2", 00:12:39.815 "uuid": "0fe92d29-4b81-5507-ac7f-20ae4adb1b59", 00:12:39.815 "is_configured": true, 00:12:39.815 "data_offset": 2048, 00:12:39.815 "data_size": 63488 00:12:39.815 } 00:12:39.815 ] 00:12:39.815 }' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 87820 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 87820 ']' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 87820 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 87820 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:39.815 killing process with pid 87820 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 87820' 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 87820 00:12:39.815 Received shutdown signal, test time was about 60.000000 seconds 00:12:39.815 00:12:39.815 Latency(us) 00:12:39.815 [2024-11-10T15:21:46.178Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.815 [2024-11-10T15:21:46.178Z] =================================================================================================================== 00:12:39.815 [2024-11-10T15:21:46.178Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:39.815 [2024-11-10 15:21:46.154860] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:39.815 [2024-11-10 15:21:46.154990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.815 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 87820 00:12:39.815 [2024-11-10 15:21:46.155060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.815 [2024-11-10 15:21:46.155074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:40.075 [2024-11-10 15:21:46.187360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:40.075 15:21:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:40.075 00:12:40.075 real 0m21.866s 00:12:40.075 user 0m26.887s 00:12:40.075 sys 0m3.682s 00:12:40.075 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.075 15:21:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.075 ************************************ 00:12:40.075 END TEST raid_rebuild_test_sb 00:12:40.075 ************************************ 00:12:40.336 15:21:46 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:40.336 15:21:46 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:40.336 15:21:46 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.336 15:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:40.336 ************************************ 00:12:40.336 START TEST raid_rebuild_test_io 00:12:40.336 ************************************ 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 false true true 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88536 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88536 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 
88536 ']' 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.336 15:21:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.336 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:40.336 Zero copy mechanism will not be used. 00:12:40.336 [2024-11-10 15:21:46.571943] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:12:40.336 [2024-11-10 15:21:46.572078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88536 ] 00:12:40.600 [2024-11-10 15:21:46.704959] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:40.600 [2024-11-10 15:21:46.745309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:40.600 [2024-11-10 15:21:46.770700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.600 [2024-11-10 15:21:46.813545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.600 [2024-11-10 15:21:46.813594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 BaseBdev1_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 [2024-11-10 15:21:47.425721] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:41.176 [2024-11-10 15:21:47.425791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.176 [2024-11-10 15:21:47.425836] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:41.176 [2024-11-10 
15:21:47.425858] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.176 [2024-11-10 15:21:47.428300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.176 [2024-11-10 15:21:47.428349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:41.176 BaseBdev1 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 BaseBdev2_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 [2024-11-10 15:21:47.447136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:41.176 [2024-11-10 15:21:47.447194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.176 [2024-11-10 15:21:47.447212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:41.176 [2024-11-10 15:21:47.447222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.176 [2024-11-10 15:21:47.449503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:41.176 [2024-11-10 15:21:47.449543] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:41.176 BaseBdev2 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 spare_malloc 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 spare_delay 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 [2024-11-10 15:21:47.484304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:41.176 [2024-11-10 15:21:47.484453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:41.176 [2024-11-10 15:21:47.484483] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:41.176 [2024-11-10 15:21:47.484497] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:41.176 [2024-11-10 15:21:47.486815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:41.176 [2024-11-10 15:21:47.486858] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:41.176 spare 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.176 [2024-11-10 15:21:47.496387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.176 [2024-11-10 15:21:47.498382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.176 [2024-11-10 15:21:47.498492] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:41.176 [2024-11-10 15:21:47.498504] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:41.176 [2024-11-10 15:21:47.498801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:41.176 [2024-11-10 15:21:47.498939] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:41.176 [2024-11-10 15:21:47.498954] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:41.176 [2024-11-10 15:21:47.499121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.176 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.177 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.436 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.436 "name": "raid_bdev1", 00:12:41.436 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:41.436 "strip_size_kb": 0, 00:12:41.436 "state": "online", 00:12:41.436 "raid_level": "raid1", 00:12:41.436 "superblock": false, 00:12:41.436 "num_base_bdevs": 2, 00:12:41.436 
"num_base_bdevs_discovered": 2, 00:12:41.436 "num_base_bdevs_operational": 2, 00:12:41.436 "base_bdevs_list": [ 00:12:41.436 { 00:12:41.436 "name": "BaseBdev1", 00:12:41.436 "uuid": "18fbeb6e-b5a7-5235-bb93-34b366b43146", 00:12:41.436 "is_configured": true, 00:12:41.436 "data_offset": 0, 00:12:41.436 "data_size": 65536 00:12:41.436 }, 00:12:41.436 { 00:12:41.436 "name": "BaseBdev2", 00:12:41.436 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:41.436 "is_configured": true, 00:12:41.436 "data_offset": 0, 00:12:41.436 "data_size": 65536 00:12:41.436 } 00:12:41.436 ] 00:12:41.436 }' 00:12:41.436 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.436 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.695 [2024-11-10 15:21:47.912926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:41.695 15:21:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.695 [2024-11-10 15:21:48.004464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.695 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.955 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.955 "name": "raid_bdev1", 00:12:41.955 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:41.955 "strip_size_kb": 0, 00:12:41.955 "state": "online", 00:12:41.955 "raid_level": "raid1", 00:12:41.955 "superblock": false, 00:12:41.955 "num_base_bdevs": 2, 00:12:41.955 "num_base_bdevs_discovered": 1, 00:12:41.955 "num_base_bdevs_operational": 1, 00:12:41.955 "base_bdevs_list": [ 00:12:41.955 { 00:12:41.955 "name": null, 00:12:41.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.955 "is_configured": false, 00:12:41.955 "data_offset": 0, 00:12:41.955 "data_size": 65536 00:12:41.955 }, 00:12:41.955 { 00:12:41.955 "name": "BaseBdev2", 00:12:41.955 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:41.955 "is_configured": true, 00:12:41.955 "data_offset": 0, 00:12:41.955 "data_size": 65536 00:12:41.955 } 00:12:41.955 ] 00:12:41.955 }' 00:12:41.955 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.955 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.955 [2024-11-10 15:21:48.091905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:41.955 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:12:41.955 Zero copy mechanism will not be used. 00:12:41.955 Running I/O for 60 seconds... 00:12:42.214 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:42.214 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.214 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.214 [2024-11-10 15:21:48.428294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.214 15:21:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.214 15:21:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:42.214 [2024-11-10 15:21:48.482582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:42.214 [2024-11-10 15:21:48.484818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:42.474 [2024-11-10 15:21:48.603631] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:42.474 [2024-11-10 15:21:48.604359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:42.474 [2024-11-10 15:21:48.813314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:42.474 [2024-11-10 15:21:48.813577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:42.733 [2024-11-10 15:21:49.038409] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:42.993 162.00 IOPS, 486.00 MiB/s [2024-11-10T15:21:49.356Z] [2024-11-10 15:21:49.264245] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:43.253 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.254 [2024-11-10 15:21:49.497893] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:43.254 "name": "raid_bdev1", 00:12:43.254 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:43.254 "strip_size_kb": 0, 00:12:43.254 "state": "online", 00:12:43.254 "raid_level": "raid1", 00:12:43.254 "superblock": false, 00:12:43.254 "num_base_bdevs": 2, 00:12:43.254 "num_base_bdevs_discovered": 2, 00:12:43.254 "num_base_bdevs_operational": 2, 00:12:43.254 "process": { 00:12:43.254 "type": "rebuild", 00:12:43.254 "target": "spare", 00:12:43.254 "progress": { 00:12:43.254 "blocks": 12288, 00:12:43.254 "percent": 18 00:12:43.254 } 00:12:43.254 }, 
00:12:43.254 "base_bdevs_list": [ 00:12:43.254 { 00:12:43.254 "name": "spare", 00:12:43.254 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:43.254 "is_configured": true, 00:12:43.254 "data_offset": 0, 00:12:43.254 "data_size": 65536 00:12:43.254 }, 00:12:43.254 { 00:12:43.254 "name": "BaseBdev2", 00:12:43.254 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:43.254 "is_configured": true, 00:12:43.254 "data_offset": 0, 00:12:43.254 "data_size": 65536 00:12:43.254 } 00:12:43.254 ] 00:12:43.254 }' 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:43.254 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:43.513 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:43.513 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:43.513 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.513 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.513 [2024-11-10 15:21:49.621712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.513 [2024-11-10 15:21:49.831096] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:43.513 [2024-11-10 15:21:49.839548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.513 [2024-11-10 15:21:49.839611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.513 [2024-11-10 15:21:49.839625] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:43.513 [2024-11-10 15:21:49.865547] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.773 "name": "raid_bdev1", 00:12:43.773 
"uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:43.773 "strip_size_kb": 0, 00:12:43.773 "state": "online", 00:12:43.773 "raid_level": "raid1", 00:12:43.773 "superblock": false, 00:12:43.773 "num_base_bdevs": 2, 00:12:43.773 "num_base_bdevs_discovered": 1, 00:12:43.773 "num_base_bdevs_operational": 1, 00:12:43.773 "base_bdevs_list": [ 00:12:43.773 { 00:12:43.773 "name": null, 00:12:43.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.773 "is_configured": false, 00:12:43.773 "data_offset": 0, 00:12:43.773 "data_size": 65536 00:12:43.773 }, 00:12:43.773 { 00:12:43.773 "name": "BaseBdev2", 00:12:43.773 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:43.773 "is_configured": true, 00:12:43.773 "data_offset": 0, 00:12:43.773 "data_size": 65536 00:12:43.773 } 00:12:43.773 ] 00:12:43.773 }' 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.773 15:21:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 154.50 IOPS, 463.50 MiB/s [2024-11-10T15:21:50.396Z] 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.033 15:21:50 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.033 "name": "raid_bdev1", 00:12:44.033 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:44.033 "strip_size_kb": 0, 00:12:44.033 "state": "online", 00:12:44.033 "raid_level": "raid1", 00:12:44.033 "superblock": false, 00:12:44.033 "num_base_bdevs": 2, 00:12:44.033 "num_base_bdevs_discovered": 1, 00:12:44.033 "num_base_bdevs_operational": 1, 00:12:44.033 "base_bdevs_list": [ 00:12:44.033 { 00:12:44.033 "name": null, 00:12:44.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.033 "is_configured": false, 00:12:44.033 "data_offset": 0, 00:12:44.033 "data_size": 65536 00:12:44.033 }, 00:12:44.033 { 00:12:44.033 "name": "BaseBdev2", 00:12:44.033 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:44.033 "is_configured": true, 00:12:44.033 "data_offset": 0, 00:12:44.033 "data_size": 65536 00:12:44.033 } 00:12:44.033 ] 00:12:44.033 }' 00:12:44.033 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.294 [2024-11-10 15:21:50.483758] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.294 15:21:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:44.294 [2024-11-10 15:21:50.520810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:44.294 [2024-11-10 15:21:50.523026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.294 [2024-11-10 15:21:50.630323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.294 [2024-11-10 15:21:50.630767] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.554 [2024-11-10 15:21:50.840466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.554 [2024-11-10 15:21:50.840949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.813 [2024-11-10 15:21:51.085545] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:45.073 169.00 IOPS, 507.00 MiB/s [2024-11-10T15:21:51.436Z] [2024-11-10 15:21:51.201739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.333 "name": "raid_bdev1", 00:12:45.333 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:45.333 "strip_size_kb": 0, 00:12:45.333 "state": "online", 00:12:45.333 "raid_level": "raid1", 00:12:45.333 "superblock": false, 00:12:45.333 "num_base_bdevs": 2, 00:12:45.333 "num_base_bdevs_discovered": 2, 00:12:45.333 "num_base_bdevs_operational": 2, 00:12:45.333 "process": { 00:12:45.333 "type": "rebuild", 00:12:45.333 "target": "spare", 00:12:45.333 "progress": { 00:12:45.333 "blocks": 14336, 00:12:45.333 "percent": 21 00:12:45.333 } 00:12:45.333 }, 00:12:45.333 "base_bdevs_list": [ 00:12:45.333 { 00:12:45.333 "name": "spare", 00:12:45.333 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:45.333 "is_configured": true, 00:12:45.333 "data_offset": 0, 00:12:45.333 "data_size": 65536 00:12:45.333 }, 00:12:45.333 { 00:12:45.333 "name": "BaseBdev2", 00:12:45.333 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:45.333 "is_configured": true, 00:12:45.333 "data_offset": 0, 00:12:45.333 "data_size": 65536 00:12:45.333 } 00:12:45.333 ] 00:12:45.333 }' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.333 15:21:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=325 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.333 15:21:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.593 15:21:51 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.593 "name": "raid_bdev1", 00:12:45.593 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:45.593 "strip_size_kb": 0, 00:12:45.593 "state": "online", 00:12:45.593 "raid_level": "raid1", 00:12:45.593 "superblock": false, 00:12:45.593 "num_base_bdevs": 2, 00:12:45.593 "num_base_bdevs_discovered": 2, 00:12:45.593 "num_base_bdevs_operational": 2, 00:12:45.593 "process": { 00:12:45.593 "type": "rebuild", 00:12:45.593 "target": "spare", 00:12:45.593 "progress": { 00:12:45.593 "blocks": 18432, 00:12:45.593 "percent": 28 00:12:45.593 } 00:12:45.593 }, 00:12:45.593 "base_bdevs_list": [ 00:12:45.593 { 00:12:45.593 "name": "spare", 00:12:45.593 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:45.593 "is_configured": true, 00:12:45.593 "data_offset": 0, 00:12:45.593 "data_size": 65536 00:12:45.593 }, 00:12:45.593 { 00:12:45.593 "name": "BaseBdev2", 00:12:45.593 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:45.593 "is_configured": true, 00:12:45.593 "data_offset": 0, 00:12:45.593 "data_size": 65536 00:12:45.593 } 00:12:45.593 ] 00:12:45.593 }' 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:45.593 15:21:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:45.593 [2024-11-10 15:21:51.793821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:45.593 [2024-11-10 
15:21:51.900769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:45.594 [2024-11-10 15:21:51.901108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:46.113 141.25 IOPS, 423.75 MiB/s [2024-11-10T15:21:52.476Z] [2024-11-10 15:21:52.235996] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:46.113 [2024-11-10 15:21:52.343636] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:46.372 [2024-11-10 15:21:52.680114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.631 15:21:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.631 "name": "raid_bdev1", 00:12:46.631 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:46.631 "strip_size_kb": 0, 00:12:46.631 "state": "online", 00:12:46.631 "raid_level": "raid1", 00:12:46.631 "superblock": false, 00:12:46.631 "num_base_bdevs": 2, 00:12:46.631 "num_base_bdevs_discovered": 2, 00:12:46.631 "num_base_bdevs_operational": 2, 00:12:46.631 "process": { 00:12:46.631 "type": "rebuild", 00:12:46.631 "target": "spare", 00:12:46.631 "progress": { 00:12:46.631 "blocks": 32768, 00:12:46.631 "percent": 50 00:12:46.631 } 00:12:46.631 }, 00:12:46.631 "base_bdevs_list": [ 00:12:46.631 { 00:12:46.631 "name": "spare", 00:12:46.631 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:46.631 "is_configured": true, 00:12:46.631 "data_offset": 0, 00:12:46.631 "data_size": 65536 00:12:46.631 }, 00:12:46.631 { 00:12:46.631 "name": "BaseBdev2", 00:12:46.631 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:46.631 "is_configured": true, 00:12:46.631 "data_offset": 0, 00:12:46.631 "data_size": 65536 00:12:46.631 } 00:12:46.631 ] 00:12:46.631 }' 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.631 [2024-11-10 15:21:52.897008] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.631 15:21:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.895 121.80 IOPS, 365.40 
MiB/s [2024-11-10T15:21:53.258Z] [2024-11-10 15:21:53.120431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:47.156 [2024-11-10 15:21:53.266792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:47.414 [2024-11-10 15:21:53.699651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.676 "name": "raid_bdev1", 00:12:47.676 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:47.676 "strip_size_kb": 0, 00:12:47.676 "state": "online", 
00:12:47.676 "raid_level": "raid1", 00:12:47.676 "superblock": false, 00:12:47.676 "num_base_bdevs": 2, 00:12:47.676 "num_base_bdevs_discovered": 2, 00:12:47.676 "num_base_bdevs_operational": 2, 00:12:47.676 "process": { 00:12:47.676 "type": "rebuild", 00:12:47.676 "target": "spare", 00:12:47.676 "progress": { 00:12:47.676 "blocks": 49152, 00:12:47.676 "percent": 75 00:12:47.676 } 00:12:47.676 }, 00:12:47.676 "base_bdevs_list": [ 00:12:47.676 { 00:12:47.676 "name": "spare", 00:12:47.676 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:47.676 "is_configured": true, 00:12:47.676 "data_offset": 0, 00:12:47.676 "data_size": 65536 00:12:47.676 }, 00:12:47.676 { 00:12:47.676 "name": "BaseBdev2", 00:12:47.676 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:47.676 "is_configured": true, 00:12:47.676 "data_offset": 0, 00:12:47.676 "data_size": 65536 00:12:47.676 } 00:12:47.676 ] 00:12:47.676 }' 00:12:47.676 15:21:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.676 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:47.936 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.936 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:47.936 15:21:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.505 109.00 IOPS, 327.00 MiB/s [2024-11-10T15:21:54.868Z] [2024-11-10 15:21:54.786296] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:48.765 [2024-11-10 15:21:54.891628] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:48.765 [2024-11-10 15:21:54.893467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.765 97.86 IOPS, 293.57 MiB/s [2024-11-10T15:21:55.128Z] 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.765 "name": "raid_bdev1", 00:12:48.765 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:48.765 "strip_size_kb": 0, 00:12:48.765 "state": "online", 00:12:48.765 "raid_level": "raid1", 00:12:48.765 "superblock": false, 00:12:48.765 "num_base_bdevs": 2, 00:12:48.765 "num_base_bdevs_discovered": 2, 00:12:48.765 "num_base_bdevs_operational": 2, 00:12:48.765 "base_bdevs_list": [ 00:12:48.765 { 00:12:48.765 "name": "spare", 00:12:48.765 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:48.765 "is_configured": true, 00:12:48.765 "data_offset": 0, 00:12:48.765 "data_size": 65536 00:12:48.765 }, 00:12:48.765 { 00:12:48.765 "name": "BaseBdev2", 00:12:48.765 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:48.765 
"is_configured": true, 00:12:48.765 "data_offset": 0, 00:12:48.765 "data_size": 65536 00:12:48.765 } 00:12:48.765 ] 00:12:48.765 }' 00:12:48.765 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.025 "name": "raid_bdev1", 00:12:49.025 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:49.025 
"strip_size_kb": 0, 00:12:49.025 "state": "online", 00:12:49.025 "raid_level": "raid1", 00:12:49.025 "superblock": false, 00:12:49.025 "num_base_bdevs": 2, 00:12:49.025 "num_base_bdevs_discovered": 2, 00:12:49.025 "num_base_bdevs_operational": 2, 00:12:49.025 "base_bdevs_list": [ 00:12:49.025 { 00:12:49.025 "name": "spare", 00:12:49.025 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:49.025 "is_configured": true, 00:12:49.025 "data_offset": 0, 00:12:49.025 "data_size": 65536 00:12:49.025 }, 00:12:49.025 { 00:12:49.025 "name": "BaseBdev2", 00:12:49.025 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:49.025 "is_configured": true, 00:12:49.025 "data_offset": 0, 00:12:49.025 "data_size": 65536 00:12:49.025 } 00:12:49.025 ] 00:12:49.025 }' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.025 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.285 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.285 "name": "raid_bdev1", 00:12:49.285 "uuid": "5d5b2c82-42fe-4a9e-a71d-36aa1747713f", 00:12:49.285 "strip_size_kb": 0, 00:12:49.285 "state": "online", 00:12:49.285 "raid_level": "raid1", 00:12:49.285 "superblock": false, 00:12:49.285 "num_base_bdevs": 2, 00:12:49.285 "num_base_bdevs_discovered": 2, 00:12:49.285 "num_base_bdevs_operational": 2, 00:12:49.285 "base_bdevs_list": [ 00:12:49.285 { 00:12:49.285 "name": "spare", 00:12:49.285 "uuid": "a66c8c46-1fce-5e85-8f60-7b704b5bbce4", 00:12:49.285 "is_configured": true, 00:12:49.285 "data_offset": 0, 00:12:49.285 "data_size": 65536 00:12:49.285 }, 00:12:49.285 { 00:12:49.285 "name": "BaseBdev2", 00:12:49.285 "uuid": "e3ed8a92-eb7d-5b0f-9f08-2a398b198691", 00:12:49.285 "is_configured": true, 00:12:49.285 "data_offset": 0, 00:12:49.285 "data_size": 65536 00:12:49.285 } 00:12:49.285 ] 00:12:49.285 }' 00:12:49.285 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.285 15:21:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.545 [2024-11-10 15:21:55.816259] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:49.545 [2024-11-10 15:21:55.816303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:49.545 00:12:49.545 Latency(us) 00:12:49.545 [2024-11-10T15:21:55.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:49.545 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:49.545 raid_bdev1 : 7.77 92.54 277.63 0.00 0.00 14583.11 274.90 115157.83 00:12:49.545 [2024-11-10T15:21:55.908Z] =================================================================================================================== 00:12:49.545 [2024-11-10T15:21:55.908Z] Total : 92.54 277.63 0.00 0.00 14583.11 274.90 115157.83 00:12:49.545 [2024-11-10 15:21:55.867893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.545 [2024-11-10 15:21:55.867949] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:49.545 [2024-11-10 15:21:55.868053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:49.545 [2024-11-10 15:21:55.868069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:12:49.545 { 00:12:49.545 "results": [ 00:12:49.545 { 00:12:49.545 "job": "raid_bdev1", 00:12:49.545 "core_mask": "0x1", 00:12:49.545 "workload": "randrw", 00:12:49.545 "percentage": 50, 00:12:49.545 "status": "finished", 00:12:49.545 "queue_depth": 2, 
00:12:49.545 "io_size": 3145728, 00:12:49.545 "runtime": 7.769255, 00:12:49.545 "iops": 92.54426582728975, 00:12:49.545 "mibps": 277.63279748186926, 00:12:49.545 "io_failed": 0, 00:12:49.545 "io_timeout": 0, 00:12:49.545 "avg_latency_us": 14583.105761886134, 00:12:49.545 "min_latency_us": 274.8993288590604, 00:12:49.545 "max_latency_us": 115157.82794386821 00:12:49.545 } 00:12:49.545 ], 00:12:49.545 "core_count": 1 00:12:49.545 } 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.545 15:21:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.805 15:21:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:49.805 /dev/nbd0 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:49.805 1+0 records in 00:12:49.805 1+0 records out 00:12:49.805 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469105 s, 8.7 MB/s 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:49.805 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:50.066 /dev/nbd1 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:50.066 1+0 records in 00:12:50.066 1+0 records out 00:12:50.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387192 s, 10.6 MB/s 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:12:50.066 15:21:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:50.066 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:50.326 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 
00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 88536 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 88536 ']' 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@956 -- # kill -0 88536 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:50.586 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88536 00:12:50.845 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:50.845 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:50.845 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88536' 00:12:50.845 killing process with pid 88536 00:12:50.845 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 88536 00:12:50.845 Received shutdown signal, test time was about 8.871540 seconds 00:12:50.845 00:12:50.845 Latency(us) 00:12:50.845 [2024-11-10T15:21:57.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.845 [2024-11-10T15:21:57.208Z] =================================================================================================================== 00:12:50.845 [2024-11-10T15:21:57.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.845 [2024-11-10 15:21:56.966549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.845 15:21:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 88536 00:12:50.845 [2024-11-10 15:21:56.992877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:50.845 15:21:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:50.845 00:12:50.845 real 0m10.729s 00:12:50.845 user 0m13.825s 00:12:50.845 sys 0m1.389s 00:12:50.845 15:21:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:50.845 15:21:57 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.845 ************************************ 00:12:50.845 END TEST raid_rebuild_test_io 00:12:50.846 ************************************ 00:12:51.105 15:21:57 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:51.105 15:21:57 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:12:51.105 15:21:57 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:51.105 15:21:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:51.105 ************************************ 00:12:51.105 START TEST raid_rebuild_test_sb_io 00:12:51.105 ************************************ 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true true true 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.105 15:21:57 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:51.105 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=88895 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 88895 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 88895 ']' 
00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:51.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:51.106 15:21:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.106 [2024-11-10 15:21:57.366157] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:12:51.106 [2024-11-10 15:21:57.366278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88895 ] 00:12:51.106 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:51.106 Zero copy mechanism will not be used. 00:12:51.365 [2024-11-10 15:21:57.498573] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:51.365 [2024-11-10 15:21:57.539031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.365 [2024-11-10 15:21:57.568333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.365 [2024-11-10 15:21:57.612553] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.365 [2024-11-10 15:21:57.612603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 BaseBdev1_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 [2024-11-10 15:21:58.205845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:51.936 [2024-11-10 15:21:58.205956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.936 [2024-11-10 15:21:58.205988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:12:51.936 [2024-11-10 15:21:58.206005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.936 [2024-11-10 15:21:58.208565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.936 [2024-11-10 15:21:58.208622] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:51.936 BaseBdev1 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 BaseBdev2_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 [2024-11-10 15:21:58.235145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:51.936 [2024-11-10 15:21:58.235231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.936 [2024-11-10 15:21:58.235272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:51.936 [2024-11-10 15:21:58.235285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.936 [2024-11-10 15:21:58.237560] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.936 [2024-11-10 15:21:58.237613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:51.936 BaseBdev2 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 spare_malloc 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.936 spare_delay 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.936 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.937 [2024-11-10 15:21:58.276229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:51.937 [2024-11-10 15:21:58.276317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.937 [2024-11-10 15:21:58.276352] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:51.937 [2024-11-10 15:21:58.276373] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.937 [2024-11-10 15:21:58.278843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.937 [2024-11-10 15:21:58.278889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:51.937 spare 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.937 [2024-11-10 15:21:58.288340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.937 [2024-11-10 15:21:58.290461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.937 [2024-11-10 15:21:58.290642] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:12:51.937 [2024-11-10 15:21:58.290658] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.937 [2024-11-10 15:21:58.290970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:51.937 [2024-11-10 15:21:58.291176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:12:51.937 [2024-11-10 15:21:58.291194] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:12:51.937 [2024-11-10 15:21:58.291373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.937 15:21:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.937 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.197 "name": "raid_bdev1", 00:12:52.197 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:52.197 
"strip_size_kb": 0, 00:12:52.197 "state": "online", 00:12:52.197 "raid_level": "raid1", 00:12:52.197 "superblock": true, 00:12:52.197 "num_base_bdevs": 2, 00:12:52.197 "num_base_bdevs_discovered": 2, 00:12:52.197 "num_base_bdevs_operational": 2, 00:12:52.197 "base_bdevs_list": [ 00:12:52.197 { 00:12:52.197 "name": "BaseBdev1", 00:12:52.197 "uuid": "7260825f-a865-558f-bfd1-331c35ab79e3", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 }, 00:12:52.197 { 00:12:52.197 "name": "BaseBdev2", 00:12:52.197 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:52.197 "is_configured": true, 00:12:52.197 "data_offset": 2048, 00:12:52.197 "data_size": 63488 00:12:52.197 } 00:12:52.197 ] 00:12:52.197 }' 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.197 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 [2024-11-10 15:21:58.676725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.462 15:21:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 [2024-11-10 15:21:58.776399] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.462 15:21:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.462 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.727 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.727 "name": "raid_bdev1", 00:12:52.727 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:52.727 "strip_size_kb": 0, 00:12:52.727 "state": "online", 00:12:52.727 "raid_level": "raid1", 00:12:52.727 "superblock": true, 00:12:52.727 "num_base_bdevs": 2, 00:12:52.727 "num_base_bdevs_discovered": 1, 00:12:52.727 "num_base_bdevs_operational": 1, 00:12:52.727 "base_bdevs_list": [ 00:12:52.727 { 00:12:52.727 "name": null, 00:12:52.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.727 "is_configured": false, 00:12:52.727 "data_offset": 0, 00:12:52.727 "data_size": 63488 00:12:52.727 }, 00:12:52.727 { 00:12:52.727 "name": "BaseBdev2", 00:12:52.727 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:52.727 "is_configured": true, 00:12:52.727 "data_offset": 2048, 00:12:52.727 "data_size": 63488 00:12:52.727 } 00:12:52.727 ] 00:12:52.727 }' 00:12:52.727 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.727 15:21:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.727 [2024-11-10 15:21:58.870523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:12:52.727 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.727 Zero copy mechanism will not be used. 00:12:52.727 Running I/O for 60 seconds... 00:12:52.988 15:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.988 15:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.988 15:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.988 [2024-11-10 15:21:59.176855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.988 15:21:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.988 15:21:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:52.988 [2024-11-10 15:21:59.231235] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:12:52.988 [2024-11-10 15:21:59.233449] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:52.988 [2024-11-10 15:21:59.347404] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:52.988 [2024-11-10 15:21:59.348129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:53.248 [2024-11-10 15:21:59.564717] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:53.248 [2024-11-10 15:21:59.565202] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:12:53.508 [2024-11-10 15:21:59.812463] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:53.508 [2024-11-10 15:21:59.813158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:53.767 194.00 IOPS, 582.00 MiB/s [2024-11-10T15:22:00.130Z] [2024-11-10 15:22:00.022651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.028 "name": "raid_bdev1", 00:12:54.028 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:54.028 "strip_size_kb": 0, 00:12:54.028 "state": "online", 00:12:54.028 "raid_level": "raid1", 
00:12:54.028 "superblock": true, 00:12:54.028 "num_base_bdevs": 2, 00:12:54.028 "num_base_bdevs_discovered": 2, 00:12:54.028 "num_base_bdevs_operational": 2, 00:12:54.028 "process": { 00:12:54.028 "type": "rebuild", 00:12:54.028 "target": "spare", 00:12:54.028 "progress": { 00:12:54.028 "blocks": 12288, 00:12:54.028 "percent": 19 00:12:54.028 } 00:12:54.028 }, 00:12:54.028 "base_bdevs_list": [ 00:12:54.028 { 00:12:54.028 "name": "spare", 00:12:54.028 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:54.028 "is_configured": true, 00:12:54.028 "data_offset": 2048, 00:12:54.028 "data_size": 63488 00:12:54.028 }, 00:12:54.028 { 00:12:54.028 "name": "BaseBdev2", 00:12:54.028 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:54.028 "is_configured": true, 00:12:54.028 "data_offset": 2048, 00:12:54.028 "data_size": 63488 00:12:54.028 } 00:12:54.028 ] 00:12:54.028 }' 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.028 [2024-11-10 15:22:00.342663] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.028 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.028 [2024-11-10 15:22:00.373363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.288 [2024-11-10 15:22:00.461260] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.288 [2024-11-10 15:22:00.461590] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:54.288 [2024-11-10 15:22:00.563119] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.288 [2024-11-10 15:22:00.571141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.288 [2024-11-10 15:22:00.571186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.288 [2024-11-10 15:22:00.571205] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.288 [2024-11-10 15:22:00.593804] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006490 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.288 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.288 15:22:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.289 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.548 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.548 "name": "raid_bdev1", 00:12:54.548 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:54.548 "strip_size_kb": 0, 00:12:54.548 "state": "online", 00:12:54.548 "raid_level": "raid1", 00:12:54.548 "superblock": true, 00:12:54.548 "num_base_bdevs": 2, 00:12:54.548 "num_base_bdevs_discovered": 1, 00:12:54.548 "num_base_bdevs_operational": 1, 00:12:54.549 "base_bdevs_list": [ 00:12:54.549 { 00:12:54.549 "name": null, 00:12:54.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.549 "is_configured": false, 00:12:54.549 "data_offset": 0, 00:12:54.549 "data_size": 63488 00:12:54.549 }, 00:12:54.549 { 00:12:54.549 "name": "BaseBdev2", 00:12:54.549 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:54.549 "is_configured": true, 00:12:54.549 "data_offset": 2048, 00:12:54.549 "data_size": 63488 00:12:54.549 } 00:12:54.549 ] 00:12:54.549 }' 00:12:54.549 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.549 15:22:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.808 166.00 IOPS, 498.00 MiB/s [2024-11-10T15:22:01.171Z] 
15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.808 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.808 "name": "raid_bdev1", 00:12:54.808 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:54.808 "strip_size_kb": 0, 00:12:54.808 "state": "online", 00:12:54.808 "raid_level": "raid1", 00:12:54.808 "superblock": true, 00:12:54.808 "num_base_bdevs": 2, 00:12:54.808 "num_base_bdevs_discovered": 1, 00:12:54.808 "num_base_bdevs_operational": 1, 00:12:54.808 "base_bdevs_list": [ 00:12:54.808 { 00:12:54.808 "name": null, 00:12:54.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.808 "is_configured": false, 00:12:54.808 "data_offset": 0, 00:12:54.808 "data_size": 63488 00:12:54.808 }, 00:12:54.808 { 00:12:54.808 "name": "BaseBdev2", 00:12:54.808 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:54.808 "is_configured": true, 00:12:54.809 
"data_offset": 2048, 00:12:54.809 "data_size": 63488 00:12:54.809 } 00:12:54.809 ] 00:12:54.809 }' 00:12:54.809 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.068 [2024-11-10 15:22:01.233612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.068 15:22:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:55.068 [2024-11-10 15:22:01.270442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:12:55.068 [2024-11-10 15:22:01.272295] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.068 [2024-11-10 15:22:01.374639] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.068 [2024-11-10 15:22:01.375073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:55.327 [2024-11-10 15:22:01.610691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.327 [2024-11-10 
15:22:01.610965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:55.586 169.67 IOPS, 509.00 MiB/s [2024-11-10T15:22:01.949Z] [2024-11-10 15:22:01.945427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.586 [2024-11-10 15:22:01.945838] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:55.847 [2024-11-10 15:22:02.164873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.110 "name": "raid_bdev1", 00:12:56.110 "uuid": 
"fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:56.110 "strip_size_kb": 0, 00:12:56.110 "state": "online", 00:12:56.110 "raid_level": "raid1", 00:12:56.110 "superblock": true, 00:12:56.110 "num_base_bdevs": 2, 00:12:56.110 "num_base_bdevs_discovered": 2, 00:12:56.110 "num_base_bdevs_operational": 2, 00:12:56.110 "process": { 00:12:56.110 "type": "rebuild", 00:12:56.110 "target": "spare", 00:12:56.110 "progress": { 00:12:56.110 "blocks": 12288, 00:12:56.110 "percent": 19 00:12:56.110 } 00:12:56.110 }, 00:12:56.110 "base_bdevs_list": [ 00:12:56.110 { 00:12:56.110 "name": "spare", 00:12:56.110 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:56.110 "is_configured": true, 00:12:56.110 "data_offset": 2048, 00:12:56.110 "data_size": 63488 00:12:56.110 }, 00:12:56.110 { 00:12:56.110 "name": "BaseBdev2", 00:12:56.110 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:56.110 "is_configured": true, 00:12:56.110 "data_offset": 2048, 00:12:56.110 "data_size": 63488 00:12:56.110 } 00:12:56.110 ] 00:12:56.110 }' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:56.110 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=336 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.110 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.110 "name": "raid_bdev1", 00:12:56.110 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:56.110 "strip_size_kb": 0, 00:12:56.110 "state": "online", 00:12:56.110 "raid_level": "raid1", 00:12:56.110 "superblock": true, 00:12:56.110 "num_base_bdevs": 2, 00:12:56.110 "num_base_bdevs_discovered": 2, 00:12:56.110 "num_base_bdevs_operational": 2, 00:12:56.110 "process": { 
00:12:56.110 "type": "rebuild", 00:12:56.110 "target": "spare", 00:12:56.110 "progress": { 00:12:56.110 "blocks": 14336, 00:12:56.110 "percent": 22 00:12:56.110 } 00:12:56.110 }, 00:12:56.110 "base_bdevs_list": [ 00:12:56.110 { 00:12:56.110 "name": "spare", 00:12:56.110 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:56.111 "is_configured": true, 00:12:56.111 "data_offset": 2048, 00:12:56.111 "data_size": 63488 00:12:56.111 }, 00:12:56.111 { 00:12:56.111 "name": "BaseBdev2", 00:12:56.111 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:56.111 "is_configured": true, 00:12:56.111 "data_offset": 2048, 00:12:56.111 "data_size": 63488 00:12:56.111 } 00:12:56.111 ] 00:12:56.111 }' 00:12:56.111 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.371 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.371 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.371 [2024-11-10 15:22:02.512266] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:56.371 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.371 15:22:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:56.371 [2024-11-10 15:22:02.727184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:56.631 [2024-11-10 15:22:02.841738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:56.631 [2024-11-10 15:22:02.842128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:56.890 147.75 IOPS, 443.25 MiB/s [2024-11-10T15:22:03.253Z] [2024-11-10 
15:22:03.205088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.461 "name": "raid_bdev1", 00:12:57.461 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:57.461 "strip_size_kb": 0, 00:12:57.461 "state": "online", 00:12:57.461 "raid_level": "raid1", 00:12:57.461 "superblock": true, 00:12:57.461 "num_base_bdevs": 2, 00:12:57.461 "num_base_bdevs_discovered": 2, 00:12:57.461 "num_base_bdevs_operational": 2, 00:12:57.461 "process": { 00:12:57.461 "type": "rebuild", 00:12:57.461 "target": "spare", 00:12:57.461 "progress": { 00:12:57.461 "blocks": 32768, 
00:12:57.461 "percent": 51 00:12:57.461 } 00:12:57.461 }, 00:12:57.461 "base_bdevs_list": [ 00:12:57.461 { 00:12:57.461 "name": "spare", 00:12:57.461 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:57.461 "is_configured": true, 00:12:57.461 "data_offset": 2048, 00:12:57.461 "data_size": 63488 00:12:57.461 }, 00:12:57.461 { 00:12:57.461 "name": "BaseBdev2", 00:12:57.461 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:57.461 "is_configured": true, 00:12:57.461 "data_offset": 2048, 00:12:57.461 "data_size": 63488 00:12:57.461 } 00:12:57.461 ] 00:12:57.461 }' 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.461 [2024-11-10 15:22:03.651725] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:57.461 [2024-11-10 15:22:03.652002] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:57.461 15:22:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:57.980 131.40 IOPS, 394.20 MiB/s [2024-11-10T15:22:04.343Z] [2024-11-10 15:22:04.106720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.551 "name": "raid_bdev1", 00:12:58.551 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:58.551 "strip_size_kb": 0, 00:12:58.551 "state": "online", 00:12:58.551 "raid_level": "raid1", 00:12:58.551 "superblock": true, 00:12:58.551 "num_base_bdevs": 2, 00:12:58.551 "num_base_bdevs_discovered": 2, 00:12:58.551 "num_base_bdevs_operational": 2, 00:12:58.551 "process": { 00:12:58.551 "type": "rebuild", 00:12:58.551 "target": "spare", 00:12:58.551 "progress": { 00:12:58.551 "blocks": 51200, 00:12:58.551 "percent": 80 00:12:58.551 } 00:12:58.551 }, 00:12:58.551 "base_bdevs_list": [ 00:12:58.551 { 00:12:58.551 "name": "spare", 00:12:58.551 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:58.551 "is_configured": true, 00:12:58.551 "data_offset": 2048, 00:12:58.551 "data_size": 63488 00:12:58.551 }, 00:12:58.551 { 00:12:58.551 "name": "BaseBdev2", 00:12:58.551 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:58.551 
"is_configured": true, 00:12:58.551 "data_offset": 2048, 00:12:58.551 "data_size": 63488 00:12:58.551 } 00:12:58.551 ] 00:12:58.551 }' 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.551 [2024-11-10 15:22:04.750523] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.551 15:22:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:59.121 116.33 IOPS, 349.00 MiB/s [2024-11-10T15:22:05.484Z] [2024-11-10 15:22:05.176284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:59.381 [2024-11-10 15:22:05.498187] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:59.381 [2024-11-10 15:22:05.598138] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:59.381 [2024-11-10 15:22:05.600299] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.641 "name": "raid_bdev1", 00:12:59.641 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:59.641 "strip_size_kb": 0, 00:12:59.641 "state": "online", 00:12:59.641 "raid_level": "raid1", 00:12:59.641 "superblock": true, 00:12:59.641 "num_base_bdevs": 2, 00:12:59.641 "num_base_bdevs_discovered": 2, 00:12:59.641 "num_base_bdevs_operational": 2, 00:12:59.641 "base_bdevs_list": [ 00:12:59.641 { 00:12:59.641 "name": "spare", 00:12:59.641 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:59.641 "is_configured": true, 00:12:59.641 "data_offset": 2048, 00:12:59.641 "data_size": 63488 00:12:59.641 }, 00:12:59.641 { 00:12:59.641 "name": "BaseBdev2", 00:12:59.641 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:59.641 "is_configured": true, 00:12:59.641 "data_offset": 2048, 00:12:59.641 "data_size": 63488 00:12:59.641 } 00:12:59.641 ] 00:12:59.641 }' 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.641 104.29 IOPS, 312.86 MiB/s [2024-11-10T15:22:06.004Z] 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:59.641 15:22:05 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.641 15:22:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:59.901 "name": "raid_bdev1", 00:12:59.901 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:59.901 "strip_size_kb": 0, 00:12:59.901 "state": "online", 00:12:59.901 "raid_level": "raid1", 00:12:59.901 "superblock": true, 00:12:59.901 "num_base_bdevs": 2, 00:12:59.901 "num_base_bdevs_discovered": 2, 00:12:59.901 "num_base_bdevs_operational": 2, 00:12:59.901 "base_bdevs_list": [ 00:12:59.901 { 00:12:59.901 "name": "spare", 00:12:59.901 "uuid": 
"9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:59.901 "is_configured": true, 00:12:59.901 "data_offset": 2048, 00:12:59.901 "data_size": 63488 00:12:59.901 }, 00:12:59.901 { 00:12:59.901 "name": "BaseBdev2", 00:12:59.901 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:59.901 "is_configured": true, 00:12:59.901 "data_offset": 2048, 00:12:59.901 "data_size": 63488 00:12:59.901 } 00:12:59.901 ] 00:12:59.901 }' 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:59.901 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.902 "name": "raid_bdev1", 00:12:59.902 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:12:59.902 "strip_size_kb": 0, 00:12:59.902 "state": "online", 00:12:59.902 "raid_level": "raid1", 00:12:59.902 "superblock": true, 00:12:59.902 "num_base_bdevs": 2, 00:12:59.902 "num_base_bdevs_discovered": 2, 00:12:59.902 "num_base_bdevs_operational": 2, 00:12:59.902 "base_bdevs_list": [ 00:12:59.902 { 00:12:59.902 "name": "spare", 00:12:59.902 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:12:59.902 "is_configured": true, 00:12:59.902 "data_offset": 2048, 00:12:59.902 "data_size": 63488 00:12:59.902 }, 00:12:59.902 { 00:12:59.902 "name": "BaseBdev2", 00:12:59.902 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:12:59.902 "is_configured": true, 00:12:59.902 "data_offset": 2048, 00:12:59.902 "data_size": 63488 00:12:59.902 } 00:12:59.902 ] 00:12:59.902 }' 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.902 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.472 15:22:06 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.472 [2024-11-10 15:22:06.570164] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.472 [2024-11-10 15:22:06.570305] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.472 00:13:00.472 Latency(us) 00:13:00.472 [2024-11-10T15:22:06.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.472 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:00.472 raid_bdev1 : 7.80 96.71 290.12 0.00 0.00 14444.06 264.19 111959.00 00:13:00.472 [2024-11-10T15:22:06.835Z] =================================================================================================================== 00:13:00.472 [2024-11-10T15:22:06.835Z] Total : 96.71 290.12 0.00 0.00 14444.06 264.19 111959.00 00:13:00.472 [2024-11-10 15:22:06.675125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.472 [2024-11-10 15:22:06.675229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.472 [2024-11-10 15:22:06.675344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.472 [2024-11-10 15:22:06.675400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:00.472 { 00:13:00.472 "results": [ 00:13:00.472 { 00:13:00.472 "job": "raid_bdev1", 00:13:00.472 "core_mask": "0x1", 00:13:00.472 "workload": "randrw", 00:13:00.472 "percentage": 50, 00:13:00.472 "status": "finished", 00:13:00.472 "queue_depth": 2, 00:13:00.472 "io_size": 3145728, 00:13:00.472 "runtime": 7.796764, 00:13:00.472 "iops": 96.70678758520843, 00:13:00.472 "mibps": 290.1203627556253, 00:13:00.472 "io_failed": 0, 00:13:00.472 "io_timeout": 0, 00:13:00.472 "avg_latency_us": 14444.063874103218, 00:13:00.472 "min_latency_us": 
264.1889653970191, 00:13:00.472 "max_latency_us": 111958.99938987187 00:13:00.472 } 00:13:00.472 ], 00:13:00.472 "core_count": 1 00:13:00.472 } 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:00.472 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.473 15:22:06 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.473 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:00.733 /dev/nbd0 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.733 1+0 records in 00:13:00.733 1+0 records out 00:13:00.733 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532067 s, 7.7 MB/s 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:00.733 15:22:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:00.733 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:00.733 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:00.733 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:00.733 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.733 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:00.993 /dev/nbd1 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:00.993 1+0 records in 00:13:00.993 1+0 records out 00:13:00.993 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525707 s, 7.8 MB/s 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:00.993 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:00.994 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:01.260 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:01.261 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:01.261 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:01.261 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.533 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.533 [2024-11-10 15:22:07.802250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:01.533 [2024-11-10 15:22:07.802374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.534 [2024-11-10 15:22:07.802399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:01.534 [2024-11-10 15:22:07.802411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.534 [2024-11-10 15:22:07.804808] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.534 [2024-11-10 15:22:07.804893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:01.534 [2024-11-10 15:22:07.804977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:01.534 [2024-11-10 15:22:07.805039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.534 [2024-11-10 15:22:07.805159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.534 spare 00:13:01.534 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.534 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:01.534 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.534 15:22:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.794 [2024-11-10 15:22:07.905228] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:01.794 [2024-11-10 15:22:07.905263] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.794 [2024-11-10 15:22:07.905549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b4e0 00:13:01.794 [2024-11-10 15:22:07.905685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:01.794 [2024-11-10 15:22:07.905703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:01.794 [2024-11-10 15:22:07.905827] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.794 "name": "raid_bdev1", 00:13:01.794 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:01.794 "strip_size_kb": 0, 00:13:01.794 "state": "online", 00:13:01.794 "raid_level": "raid1", 00:13:01.794 "superblock": true, 00:13:01.794 "num_base_bdevs": 2, 00:13:01.794 "num_base_bdevs_discovered": 2, 00:13:01.794 "num_base_bdevs_operational": 2, 00:13:01.794 "base_bdevs_list": [ 00:13:01.794 { 00:13:01.794 "name": "spare", 00:13:01.794 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:13:01.794 "is_configured": true, 00:13:01.794 "data_offset": 2048, 00:13:01.794 "data_size": 63488 00:13:01.794 }, 00:13:01.794 { 00:13:01.794 "name": "BaseBdev2", 00:13:01.794 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:01.794 "is_configured": true, 00:13:01.794 "data_offset": 2048, 00:13:01.794 "data_size": 63488 00:13:01.794 } 00:13:01.794 ] 00:13:01.794 }' 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.794 15:22:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.054 15:22:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.054 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.314 "name": "raid_bdev1", 00:13:02.314 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:02.314 "strip_size_kb": 0, 00:13:02.314 "state": "online", 00:13:02.314 "raid_level": "raid1", 00:13:02.314 "superblock": true, 00:13:02.314 "num_base_bdevs": 2, 00:13:02.314 "num_base_bdevs_discovered": 2, 00:13:02.314 "num_base_bdevs_operational": 2, 00:13:02.314 "base_bdevs_list": [ 00:13:02.314 { 00:13:02.314 "name": "spare", 00:13:02.314 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:13:02.314 "is_configured": true, 00:13:02.314 "data_offset": 2048, 00:13:02.314 "data_size": 63488 00:13:02.314 }, 00:13:02.314 { 00:13:02.314 "name": "BaseBdev2", 00:13:02.314 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:02.314 "is_configured": true, 00:13:02.314 "data_offset": 2048, 00:13:02.314 "data_size": 63488 00:13:02.314 } 00:13:02.314 ] 00:13:02.314 }' 00:13:02.314 15:22:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.314 [2024-11-10 15:22:08.578600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.314 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.315 "name": "raid_bdev1", 00:13:02.315 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:02.315 "strip_size_kb": 0, 00:13:02.315 "state": "online", 00:13:02.315 "raid_level": "raid1", 00:13:02.315 "superblock": true, 00:13:02.315 "num_base_bdevs": 2, 00:13:02.315 "num_base_bdevs_discovered": 1, 00:13:02.315 "num_base_bdevs_operational": 1, 00:13:02.315 "base_bdevs_list": [ 00:13:02.315 { 00:13:02.315 "name": null, 00:13:02.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.315 "is_configured": false, 00:13:02.315 
"data_offset": 0, 00:13:02.315 "data_size": 63488 00:13:02.315 }, 00:13:02.315 { 00:13:02.315 "name": "BaseBdev2", 00:13:02.315 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:02.315 "is_configured": true, 00:13:02.315 "data_offset": 2048, 00:13:02.315 "data_size": 63488 00:13:02.315 } 00:13:02.315 ] 00:13:02.315 }' 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.315 15:22:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.884 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:02.884 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.884 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.884 [2024-11-10 15:22:09.034761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.884 [2024-11-10 15:22:09.035053] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:02.884 [2024-11-10 15:22:09.035115] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:02.884 [2024-11-10 15:22:09.035188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:02.884 [2024-11-10 15:22:09.044596] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b5b0 00:13:02.884 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.884 15:22:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:02.884 [2024-11-10 15:22:09.046748] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.824 "name": "raid_bdev1", 00:13:03.824 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:03.824 "strip_size_kb": 0, 00:13:03.824 "state": "online", 
00:13:03.824 "raid_level": "raid1", 00:13:03.824 "superblock": true, 00:13:03.824 "num_base_bdevs": 2, 00:13:03.824 "num_base_bdevs_discovered": 2, 00:13:03.824 "num_base_bdevs_operational": 2, 00:13:03.824 "process": { 00:13:03.824 "type": "rebuild", 00:13:03.824 "target": "spare", 00:13:03.824 "progress": { 00:13:03.824 "blocks": 20480, 00:13:03.824 "percent": 32 00:13:03.824 } 00:13:03.824 }, 00:13:03.824 "base_bdevs_list": [ 00:13:03.824 { 00:13:03.824 "name": "spare", 00:13:03.824 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:13:03.824 "is_configured": true, 00:13:03.824 "data_offset": 2048, 00:13:03.824 "data_size": 63488 00:13:03.824 }, 00:13:03.824 { 00:13:03.824 "name": "BaseBdev2", 00:13:03.824 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:03.824 "is_configured": true, 00:13:03.824 "data_offset": 2048, 00:13:03.824 "data_size": 63488 00:13:03.824 } 00:13:03.824 ] 00:13:03.824 }' 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.824 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.084 [2024-11-10 15:22:10.211168] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.084 [2024-11-10 15:22:10.256600] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:04.084 [2024-11-10 
15:22:10.256709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.084 [2024-11-10 15:22:10.256747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:04.084 [2024-11-10 15:22:10.256767] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.084 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.084 "name": "raid_bdev1", 00:13:04.084 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:04.084 "strip_size_kb": 0, 00:13:04.084 "state": "online", 00:13:04.084 "raid_level": "raid1", 00:13:04.084 "superblock": true, 00:13:04.084 "num_base_bdevs": 2, 00:13:04.084 "num_base_bdevs_discovered": 1, 00:13:04.084 "num_base_bdevs_operational": 1, 00:13:04.084 "base_bdevs_list": [ 00:13:04.084 { 00:13:04.084 "name": null, 00:13:04.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.084 "is_configured": false, 00:13:04.084 "data_offset": 0, 00:13:04.084 "data_size": 63488 00:13:04.084 }, 00:13:04.084 { 00:13:04.084 "name": "BaseBdev2", 00:13:04.084 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:04.084 "is_configured": true, 00:13:04.084 "data_offset": 2048, 00:13:04.084 "data_size": 63488 00:13:04.084 } 00:13:04.085 ] 00:13:04.085 }' 00:13:04.085 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.085 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.654 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:04.654 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.654 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:04.654 [2024-11-10 15:22:10.768641] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:04.654 [2024-11-10 15:22:10.768761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.654 [2024-11-10 15:22:10.768801] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:04.654 [2024-11-10 15:22:10.768829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.654 [2024-11-10 15:22:10.769327] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.654 [2024-11-10 15:22:10.769387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:04.654 [2024-11-10 15:22:10.769496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:04.654 [2024-11-10 15:22:10.769535] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.654 [2024-11-10 15:22:10.769573] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:04.654 [2024-11-10 15:22:10.769632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.655 [2024-11-10 15:22:10.777175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:13:04.655 spare 00:13:04.655 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.655 15:22:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:04.655 [2024-11-10 15:22:10.779294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.592 "name": "raid_bdev1", 00:13:05.592 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:05.592 "strip_size_kb": 0, 00:13:05.592 "state": "online", 00:13:05.592 "raid_level": "raid1", 00:13:05.592 "superblock": true, 00:13:05.592 "num_base_bdevs": 2, 00:13:05.592 "num_base_bdevs_discovered": 2, 00:13:05.592 "num_base_bdevs_operational": 2, 00:13:05.592 "process": { 00:13:05.592 "type": "rebuild", 00:13:05.592 "target": "spare", 00:13:05.592 "progress": { 00:13:05.592 "blocks": 20480, 00:13:05.592 "percent": 32 00:13:05.592 } 00:13:05.592 }, 00:13:05.592 "base_bdevs_list": [ 00:13:05.592 { 00:13:05.592 "name": "spare", 00:13:05.592 "uuid": "9663b43c-4bb8-530d-aed7-2cc412df8627", 00:13:05.592 "is_configured": true, 00:13:05.592 "data_offset": 2048, 00:13:05.592 "data_size": 63488 00:13:05.592 }, 00:13:05.592 { 00:13:05.592 "name": "BaseBdev2", 00:13:05.592 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:05.592 "is_configured": true, 00:13:05.592 "data_offset": 2048, 00:13:05.592 "data_size": 63488 00:13:05.592 } 00:13:05.592 ] 00:13:05.592 }' 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.592 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:05.593 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.593 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.593 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:05.593 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.593 15:22:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.593 [2024-11-10 15:22:11.945204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.915 [2024-11-10 15:22:11.989132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:05.915 [2024-11-10 15:22:11.989192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.915 [2024-11-10 15:22:11.989206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.915 [2024-11-10 15:22:11.989216] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.915 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.915 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.915 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.916 "name": "raid_bdev1", 00:13:05.916 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:05.916 "strip_size_kb": 0, 00:13:05.916 "state": "online", 00:13:05.916 "raid_level": "raid1", 00:13:05.916 "superblock": true, 00:13:05.916 "num_base_bdevs": 2, 00:13:05.916 "num_base_bdevs_discovered": 1, 00:13:05.916 "num_base_bdevs_operational": 1, 00:13:05.916 "base_bdevs_list": [ 00:13:05.916 { 00:13:05.916 "name": null, 00:13:05.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.916 "is_configured": false, 00:13:05.916 "data_offset": 0, 00:13:05.916 "data_size": 63488 00:13:05.916 }, 00:13:05.916 { 00:13:05.916 "name": "BaseBdev2", 00:13:05.916 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:05.916 "is_configured": true, 00:13:05.916 "data_offset": 2048, 00:13:05.916 "data_size": 63488 00:13:05.916 } 00:13:05.916 ] 00:13:05.916 }' 
00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.916 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.174 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.174 "name": "raid_bdev1", 00:13:06.174 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:06.174 "strip_size_kb": 0, 00:13:06.175 "state": "online", 00:13:06.175 "raid_level": "raid1", 00:13:06.175 "superblock": true, 00:13:06.175 "num_base_bdevs": 2, 00:13:06.175 "num_base_bdevs_discovered": 1, 00:13:06.175 "num_base_bdevs_operational": 1, 00:13:06.175 "base_bdevs_list": [ 00:13:06.175 { 00:13:06.175 "name": null, 00:13:06.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.175 "is_configured": false, 00:13:06.175 "data_offset": 0, 
00:13:06.175 "data_size": 63488 00:13:06.175 }, 00:13:06.175 { 00:13:06.175 "name": "BaseBdev2", 00:13:06.175 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:06.175 "is_configured": true, 00:13:06.175 "data_offset": 2048, 00:13:06.175 "data_size": 63488 00:13:06.175 } 00:13:06.175 ] 00:13:06.175 }' 00:13:06.175 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.434 [2024-11-10 15:22:12.612986] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.434 [2024-11-10 15:22:12.613127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.434 [2024-11-10 15:22:12.613151] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:06.434 [2024-11-10 15:22:12.613164] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.434 [2024-11-10 15:22:12.613610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.434 [2024-11-10 15:22:12.613631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.434 [2024-11-10 15:22:12.613701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:06.434 [2024-11-10 15:22:12.613720] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:06.434 [2024-11-10 15:22:12.613741] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:06.434 [2024-11-10 15:22:12.613766] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:06.434 BaseBdev1 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.434 15:22:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.374 "name": "raid_bdev1", 00:13:07.374 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:07.374 "strip_size_kb": 0, 00:13:07.374 "state": "online", 00:13:07.374 "raid_level": "raid1", 00:13:07.374 "superblock": true, 00:13:07.374 "num_base_bdevs": 2, 00:13:07.374 "num_base_bdevs_discovered": 1, 00:13:07.374 "num_base_bdevs_operational": 1, 00:13:07.374 "base_bdevs_list": [ 00:13:07.374 { 00:13:07.374 "name": null, 00:13:07.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.374 "is_configured": false, 00:13:07.374 "data_offset": 0, 00:13:07.374 "data_size": 63488 00:13:07.374 }, 00:13:07.374 { 00:13:07.374 "name": "BaseBdev2", 00:13:07.374 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:07.374 "is_configured": true, 00:13:07.374 "data_offset": 2048, 00:13:07.374 "data_size": 63488 00:13:07.374 } 00:13:07.374 ] 00:13:07.374 }' 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.374 15:22:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.945 "name": "raid_bdev1", 00:13:07.945 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:07.945 "strip_size_kb": 0, 00:13:07.945 "state": "online", 00:13:07.945 "raid_level": "raid1", 00:13:07.945 "superblock": true, 00:13:07.945 "num_base_bdevs": 2, 00:13:07.945 "num_base_bdevs_discovered": 1, 00:13:07.945 "num_base_bdevs_operational": 1, 00:13:07.945 "base_bdevs_list": [ 00:13:07.945 { 00:13:07.945 "name": null, 00:13:07.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.945 "is_configured": false, 00:13:07.945 "data_offset": 0, 00:13:07.945 "data_size": 63488 00:13:07.945 }, 00:13:07.945 { 00:13:07.945 "name": "BaseBdev2", 00:13:07.945 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:07.945 "is_configured": true, 
00:13:07.945 "data_offset": 2048, 00:13:07.945 "data_size": 63488 00:13:07.945 } 00:13:07.945 ] 00:13:07.945 }' 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.945 [2024-11-10 15:22:14.217585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.945 [2024-11-10 15:22:14.217753] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:07.945 [2024-11-10 15:22:14.217813] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:07.945 request: 00:13:07.945 { 00:13:07.945 "base_bdev": "BaseBdev1", 00:13:07.945 "raid_bdev": "raid_bdev1", 00:13:07.945 "method": "bdev_raid_add_base_bdev", 00:13:07.945 "req_id": 1 00:13:07.945 } 00:13:07.945 Got JSON-RPC error response 00:13:07.945 response: 00:13:07.945 { 00:13:07.945 "code": -22, 00:13:07.945 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:07.945 } 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:07.945 15:22:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.885 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.145 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.145 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.145 "name": "raid_bdev1", 00:13:09.145 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:09.145 "strip_size_kb": 0, 00:13:09.145 "state": "online", 00:13:09.145 "raid_level": "raid1", 00:13:09.145 "superblock": true, 00:13:09.145 "num_base_bdevs": 2, 00:13:09.145 "num_base_bdevs_discovered": 1, 00:13:09.145 "num_base_bdevs_operational": 1, 00:13:09.145 "base_bdevs_list": [ 00:13:09.145 { 00:13:09.145 "name": null, 00:13:09.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.146 "is_configured": false, 00:13:09.146 "data_offset": 0, 00:13:09.146 "data_size": 63488 00:13:09.146 }, 00:13:09.146 { 00:13:09.146 "name": "BaseBdev2", 00:13:09.146 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:09.146 "is_configured": true, 00:13:09.146 "data_offset": 2048, 00:13:09.146 "data_size": 63488 00:13:09.146 } 00:13:09.146 ] 00:13:09.146 }' 
00:13:09.146 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.146 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.406 "name": "raid_bdev1", 00:13:09.406 "uuid": "fdcc343a-5e80-4163-a372-0f0ea3a03f78", 00:13:09.406 "strip_size_kb": 0, 00:13:09.406 "state": "online", 00:13:09.406 "raid_level": "raid1", 00:13:09.406 "superblock": true, 00:13:09.406 "num_base_bdevs": 2, 00:13:09.406 "num_base_bdevs_discovered": 1, 00:13:09.406 "num_base_bdevs_operational": 1, 00:13:09.406 "base_bdevs_list": [ 00:13:09.406 { 00:13:09.406 "name": null, 00:13:09.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.406 "is_configured": false, 00:13:09.406 "data_offset": 0, 
00:13:09.406 "data_size": 63488 00:13:09.406 }, 00:13:09.406 { 00:13:09.406 "name": "BaseBdev2", 00:13:09.406 "uuid": "0aec8aee-903e-5acd-8f5f-727bc55c809d", 00:13:09.406 "is_configured": true, 00:13:09.406 "data_offset": 2048, 00:13:09.406 "data_size": 63488 00:13:09.406 } 00:13:09.406 ] 00:13:09.406 }' 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:09.406 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 88895 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 88895 ']' 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 88895 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 88895 00:13:09.666 killing process with pid 88895 00:13:09.666 Received shutdown signal, test time was about 16.964183 seconds 00:13:09.666 00:13:09.666 Latency(us) 00:13:09.666 [2024-11-10T15:22:16.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.666 [2024-11-10T15:22:16.029Z] =================================================================================================================== 00:13:09.666 [2024-11-10T15:22:16.029Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 88895' 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 88895 00:13:09.666 [2024-11-10 15:22:15.838180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:09.666 [2024-11-10 15:22:15.838282] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:09.666 [2024-11-10 15:22:15.838337] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:09.666 15:22:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 88895 00:13:09.666 [2024-11-10 15:22:15.838348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:09.666 [2024-11-10 15:22:15.886363] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.926 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:09.926 00:13:09.926 real 0m18.935s 00:13:09.926 user 0m25.111s 00:13:09.926 sys 0m2.187s 00:13:09.926 ************************************ 00:13:09.926 END TEST raid_rebuild_test_sb_io 00:13:09.926 ************************************ 00:13:09.926 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:09.926 15:22:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.926 15:22:16 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:09.926 15:22:16 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:09.926 15:22:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 
00:13:09.926 15:22:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:09.926 15:22:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:10.187 ************************************ 00:13:10.187 START TEST raid_rebuild_test 00:13:10.187 ************************************ 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false false true 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=89574 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 89574 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 89574 ']' 00:13:10.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:10.187 15:22:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.187 [2024-11-10 15:22:16.391847] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:13:10.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:10.187 Zero copy mechanism will not be used. 00:13:10.187 [2024-11-10 15:22:16.392106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89574 ] 00:13:10.187 [2024-11-10 15:22:16.526601] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:10.447 [2024-11-10 15:22:16.564220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.447 [2024-11-10 15:22:16.602873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.447 [2024-11-10 15:22:16.678704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:10.447 [2024-11-10 15:22:16.678858] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@866 -- # return 0
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 BaseBdev1_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 [2024-11-10 15:22:17.277961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:13:11.018 [2024-11-10 15:22:17.278057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.018 [2024-11-10 15:22:17.278087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:13:11.018 [2024-11-10 15:22:17.278114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.018 [2024-11-10 15:22:17.280537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.018 [2024-11-10 15:22:17.280580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:13:11.018 BaseBdev1
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 BaseBdev2_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 [2024-11-10 15:22:17.312665] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:13:11.018 [2024-11-10 15:22:17.312720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.018 [2024-11-10 15:22:17.312737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:13:11.018 [2024-11-10 15:22:17.312748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.018 [2024-11-10 15:22:17.315083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.018 [2024-11-10 15:22:17.315193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:13:11.018 BaseBdev2
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 BaseBdev3_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.018 [2024-11-10 15:22:17.347039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:13:11.018 [2024-11-10 15:22:17.347087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.018 [2024-11-10 15:22:17.347105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:13:11.018 [2024-11-10 15:22:17.347116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.018 [2024-11-10 15:22:17.349464] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.018 [2024-11-10 15:22:17.349503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:13:11.018 BaseBdev3
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.018 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.278 BaseBdev4_malloc
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.278 [2024-11-10 15:22:17.393058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc
00:13:11.278 [2024-11-10 15:22:17.393110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.278 [2024-11-10 15:22:17.393130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:13:11.278 [2024-11-10 15:22:17.393141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.278 [2024-11-10 15:22:17.395510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.278 [2024-11-10 15:22:17.395548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4
00:13:11.278 BaseBdev4
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.278 spare_malloc
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.278 spare_delay
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.278 [2024-11-10 15:22:17.439737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:13:11.278 [2024-11-10 15:22:17.439873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:13:11.278 [2024-11-10 15:22:17.439897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:13:11.278 [2024-11-10 15:22:17.439910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:13:11.278 [2024-11-10 15:22:17.442244] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:13:11.278 [2024-11-10 15:22:17.442279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:13:11.278 spare
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.278 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.279 [2024-11-10 15:22:17.451828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:11.279 [2024-11-10 15:22:17.453914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:11.279 [2024-11-10 15:22:17.453977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:11.279 [2024-11-10 15:22:17.454030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:13:11.279 [2024-11-10 15:22:17.454098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:13:11.279 [2024-11-10 15:22:17.454110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:13:11.279 [2024-11-10 15:22:17.454376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:13:11.279 [2024-11-10 15:22:17.454509] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:13:11.279 [2024-11-10 15:22:17.454518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:13:11.279 [2024-11-10 15:22:17.454629] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:11.279 "name": "raid_bdev1",
00:13:11.279 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:11.279 "strip_size_kb": 0,
00:13:11.279 "state": "online",
00:13:11.279 "raid_level": "raid1",
00:13:11.279 "superblock": false,
00:13:11.279 "num_base_bdevs": 4,
00:13:11.279 "num_base_bdevs_discovered": 4,
00:13:11.279 "num_base_bdevs_operational": 4,
00:13:11.279 "base_bdevs_list": [
00:13:11.279 {
00:13:11.279 "name": "BaseBdev1",
00:13:11.279 "uuid": "0457c9c5-eb9a-5b5e-abc8-6529e7da53aa",
00:13:11.279 "is_configured": true,
00:13:11.279 "data_offset": 0,
00:13:11.279 "data_size": 65536
00:13:11.279 },
00:13:11.279 {
00:13:11.279 "name": "BaseBdev2",
00:13:11.279 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298",
00:13:11.279 "is_configured": true,
00:13:11.279 "data_offset": 0,
00:13:11.279 "data_size": 65536
00:13:11.279 },
00:13:11.279 {
00:13:11.279 "name": "BaseBdev3",
00:13:11.279 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c",
00:13:11.279 "is_configured": true,
00:13:11.279 "data_offset": 0,
00:13:11.279 "data_size": 65536
00:13:11.279 },
00:13:11.279 {
00:13:11.279 "name": "BaseBdev4",
00:13:11.279 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36",
00:13:11.279 "is_configured": true,
00:13:11.279 "data_offset": 0,
00:13:11.279 "data_size": 65536
00:13:11.279 }
00:13:11.279 ]
00:13:11.279 }'
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:11.279 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.538 [2024-11-10 15:22:17.864127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:11.538 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:11.799 15:22:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
[2024-11-10 15:22:18.148022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560
/dev/nbd0
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:12.059 1+0 records in
00:13:12.059 1+0 records out
00:13:12.059 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000634103 s, 6.5 MB/s
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']'
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1
00:13:12.059 15:22:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct
00:13:18.652 65536+0 records in
00:13:18.652 65536+0 records out
00:13:18.652 33554432 bytes (34 MB, 32 MiB) copied, 5.51944 s, 6.1 MB/s
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:18.652 [2024-11-10 15:22:23.972791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.652 [2024-11-10 15:22:23.988902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:18.652 15:22:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:18.652 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.652 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.652 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.652 15:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:18.652 "name": "raid_bdev1",
00:13:18.652 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:18.652 "strip_size_kb": 0,
00:13:18.652 "state": "online",
00:13:18.652 "raid_level": "raid1",
00:13:18.652 "superblock": false,
00:13:18.652 "num_base_bdevs": 4,
00:13:18.652 "num_base_bdevs_discovered": 3,
00:13:18.652 "num_base_bdevs_operational": 3,
00:13:18.652 "base_bdevs_list": [
00:13:18.652 {
00:13:18.652 "name": null,
00:13:18.652 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:18.652 "is_configured": false,
00:13:18.653 "data_offset": 0,
00:13:18.653 "data_size": 65536
00:13:18.653 },
00:13:18.653 {
00:13:18.653 "name": "BaseBdev2",
00:13:18.653 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298",
00:13:18.653 "is_configured": true,
00:13:18.653 "data_offset": 0,
00:13:18.653 "data_size": 65536
00:13:18.653 },
00:13:18.653 {
00:13:18.653 "name": "BaseBdev3",
00:13:18.653 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c",
00:13:18.653 "is_configured": true,
00:13:18.653 "data_offset": 0,
00:13:18.653 "data_size": 65536
00:13:18.653 },
00:13:18.653 {
00:13:18.653 "name": "BaseBdev4",
00:13:18.653 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36",
00:13:18.653 "is_configured": true,
00:13:18.653 "data_offset": 0,
00:13:18.653 "data_size": 65536
00:13:18.653 }
00:13:18.653 ]
00:13:18.653 }'
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:18.653 [2024-11-10 15:22:24.484999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:18.653 [2024-11-10 15:22:24.492209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a180
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:18.653 15:22:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1
00:13:18.653 [2024-11-10 15:22:24.494375] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:19.223 "name": "raid_bdev1",
00:13:19.223 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:19.223 "strip_size_kb": 0,
00:13:19.223 "state": "online",
00:13:19.223 "raid_level": "raid1",
00:13:19.223 "superblock": false,
00:13:19.223 "num_base_bdevs": 4,
00:13:19.223 "num_base_bdevs_discovered": 4,
00:13:19.223 "num_base_bdevs_operational": 4,
00:13:19.223 "process": {
00:13:19.223 "type": "rebuild",
00:13:19.223 "target": "spare",
00:13:19.223 "progress": {
00:13:19.223 "blocks": 20480,
00:13:19.223 "percent": 31
00:13:19.223 }
00:13:19.223 },
00:13:19.223 "base_bdevs_list": [
00:13:19.223 {
00:13:19.223 "name": "spare",
00:13:19.223 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a",
00:13:19.223 "is_configured": true,
00:13:19.223 "data_offset": 0,
00:13:19.223 "data_size": 65536
00:13:19.223 },
00:13:19.223 {
00:13:19.223 "name": "BaseBdev2",
00:13:19.223 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298",
00:13:19.223 "is_configured": true,
00:13:19.223 "data_offset": 0,
00:13:19.223 "data_size": 65536
00:13:19.223 },
00:13:19.223 {
00:13:19.223 "name": "BaseBdev3",
00:13:19.223 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c",
00:13:19.223 "is_configured": true,
00:13:19.223 "data_offset": 0,
00:13:19.223 "data_size": 65536
00:13:19.223 },
00:13:19.223 {
00:13:19.223 "name": "BaseBdev4",
00:13:19.223 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36",
00:13:19.223 "is_configured": true,
00:13:19.223 "data_offset": 0,
00:13:19.223 "data_size": 65536
00:13:19.223 }
00:13:19.223 ]
00:13:19.223 }'
00:13:19.223 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.483 [2024-11-10 15:22:25.652854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:19.483 [2024-11-10 15:22:25.704852] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:19.483 [2024-11-10 15:22:25.704927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:19.483 [2024-11-10 15:22:25.704947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:19.483 [2024-11-10 15:22:25.704965] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:19.483 "name": "raid_bdev1",
00:13:19.483 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:19.483 "strip_size_kb": 0,
00:13:19.483 "state": "online",
00:13:19.483 "raid_level": "raid1",
00:13:19.483 "superblock": false,
00:13:19.483 "num_base_bdevs": 4,
00:13:19.483 "num_base_bdevs_discovered": 3,
00:13:19.483 "num_base_bdevs_operational": 3,
00:13:19.483 "base_bdevs_list": [
00:13:19.483 {
00:13:19.483 "name": null,
00:13:19.483 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:19.483 "is_configured": false,
00:13:19.483 "data_offset": 0,
00:13:19.483 "data_size": 65536
00:13:19.483 },
00:13:19.483 {
00:13:19.483 "name": "BaseBdev2",
00:13:19.483 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298",
00:13:19.483 "is_configured": true,
00:13:19.483 "data_offset": 0,
00:13:19.483 "data_size": 65536
00:13:19.483 },
00:13:19.483 {
00:13:19.483 "name": "BaseBdev3",
00:13:19.483 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c",
00:13:19.483 "is_configured": true,
00:13:19.483 "data_offset": 0,
00:13:19.483 "data_size": 65536
00:13:19.483 },
00:13:19.483 {
00:13:19.483 "name": "BaseBdev4",
00:13:19.483 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36",
00:13:19.483 "is_configured": true,
00:13:19.483 "data_offset": 0,
00:13:19.483 "data_size": 65536
00:13:19.483 }
00:13:19.483 ]
00:13:19.483 }'
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:19.483 15:22:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:20.054 "name": "raid_bdev1",
00:13:20.054 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:20.054 "strip_size_kb": 0,
00:13:20.054 "state": "online",
00:13:20.054 "raid_level": "raid1",
00:13:20.054 "superblock": false,
00:13:20.054 "num_base_bdevs": 4,
00:13:20.054 "num_base_bdevs_discovered": 3,
00:13:20.054 "num_base_bdevs_operational": 3,
00:13:20.054 "base_bdevs_list": [
00:13:20.054 {
00:13:20.054 "name": null,
00:13:20.054 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:20.054 "is_configured": false,
00:13:20.054 "data_offset": 0,
00:13:20.054 "data_size": 65536
00:13:20.054 },
00:13:20.054 {
00:13:20.054 "name": "BaseBdev2",
00:13:20.054 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298",
00:13:20.054 "is_configured": true,
00:13:20.054 "data_offset": 0,
00:13:20.054 "data_size": 65536
00:13:20.054 },
00:13:20.054 {
00:13:20.054 "name": "BaseBdev3",
00:13:20.054 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c",
00:13:20.054 "is_configured": true,
00:13:20.054 "data_offset": 0,
00:13:20.054 "data_size": 65536
00:13:20.054 },
00:13:20.054 {
00:13:20.054 "name": "BaseBdev4",
00:13:20.054 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36",
00:13:20.054 "is_configured": true,
00:13:20.054 "data_offset": 0,
00:13:20.054 "data_size": 65536
00:13:20.054 }
00:13:20.054 ]
00:13:20.054 }'
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.054 [2024-11-10 15:22:26.272376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:20.054 [2024-11-10 15:22:26.278687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d0a250
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.054 15:22:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1
00:13:20.054 [2024-11-10 15:22:26.280897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:13:20.994 "name": "raid_bdev1",
00:13:20.994 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e",
00:13:20.994 "strip_size_kb": 0,
00:13:20.994 "state": "online",
00:13:20.994 "raid_level": "raid1",
00:13:20.994 "superblock": false,
00:13:20.994 "num_base_bdevs": 4,
00:13:20.994 "num_base_bdevs_discovered": 4,
00:13:20.994 "num_base_bdevs_operational": 4,
00:13:20.994 "process": {
00:13:20.994 "type": "rebuild",
00:13:20.994 "target": "spare",
00:13:20.994 "progress": {
00:13:20.994 "blocks": 20480,
00:13:20.994 "percent": 31
00:13:20.994 }
00:13:20.994 },
00:13:20.994 "base_bdevs_list": [
00:13:20.994 {
00:13:20.994 "name": "spare",
00:13:20.994 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a",
00:13:20.994 "is_configured": true,
00:13:20.994 "data_offset": 0,
00:13:20.994 "data_size": 65536
00:13:20.994 },
00:13:20.994 {
00:13:20.994 "name": "BaseBdev2", 00:13:20.994 "uuid": "8c56fd8a-ef4e-5516-9a5b-4c77900ff298", 00:13:20.994 "is_configured": true, 00:13:20.994 "data_offset": 0, 00:13:20.994 "data_size": 65536 00:13:20.994 }, 00:13:20.994 { 00:13:20.994 "name": "BaseBdev3", 00:13:20.994 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:20.994 "is_configured": true, 00:13:20.994 "data_offset": 0, 00:13:20.994 "data_size": 65536 00:13:20.994 }, 00:13:20.994 { 00:13:20.994 "name": "BaseBdev4", 00:13:20.994 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:20.994 "is_configured": true, 00:13:20.994 "data_offset": 0, 00:13:20.994 "data_size": 65536 00:13:20.994 } 00:13:20.994 ] 00:13:20.994 }' 00:13:20.994 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.254 [2024-11-10 15:22:27.438923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:21.254 
[2024-11-10 15:22:27.490533] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d0a250 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.254 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.254 "name": "raid_bdev1", 00:13:21.254 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:21.254 "strip_size_kb": 0, 00:13:21.254 "state": "online", 00:13:21.255 "raid_level": "raid1", 00:13:21.255 "superblock": false, 00:13:21.255 "num_base_bdevs": 4, 00:13:21.255 "num_base_bdevs_discovered": 3, 00:13:21.255 "num_base_bdevs_operational": 3, 00:13:21.255 "process": { 
00:13:21.255 "type": "rebuild", 00:13:21.255 "target": "spare", 00:13:21.255 "progress": { 00:13:21.255 "blocks": 24576, 00:13:21.255 "percent": 37 00:13:21.255 } 00:13:21.255 }, 00:13:21.255 "base_bdevs_list": [ 00:13:21.255 { 00:13:21.255 "name": "spare", 00:13:21.255 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:21.255 "is_configured": true, 00:13:21.255 "data_offset": 0, 00:13:21.255 "data_size": 65536 00:13:21.255 }, 00:13:21.255 { 00:13:21.255 "name": null, 00:13:21.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.255 "is_configured": false, 00:13:21.255 "data_offset": 0, 00:13:21.255 "data_size": 65536 00:13:21.255 }, 00:13:21.255 { 00:13:21.255 "name": "BaseBdev3", 00:13:21.255 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:21.255 "is_configured": true, 00:13:21.255 "data_offset": 0, 00:13:21.255 "data_size": 65536 00:13:21.255 }, 00:13:21.255 { 00:13:21.255 "name": "BaseBdev4", 00:13:21.255 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:21.255 "is_configured": true, 00:13:21.255 "data_offset": 0, 00:13:21.255 "data_size": 65536 00:13:21.255 } 00:13:21.255 ] 00:13:21.255 }' 00:13:21.255 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.255 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.255 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=361 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.515 "name": "raid_bdev1", 00:13:21.515 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:21.515 "strip_size_kb": 0, 00:13:21.515 "state": "online", 00:13:21.515 "raid_level": "raid1", 00:13:21.515 "superblock": false, 00:13:21.515 "num_base_bdevs": 4, 00:13:21.515 "num_base_bdevs_discovered": 3, 00:13:21.515 "num_base_bdevs_operational": 3, 00:13:21.515 "process": { 00:13:21.515 "type": "rebuild", 00:13:21.515 "target": "spare", 00:13:21.515 "progress": { 00:13:21.515 "blocks": 26624, 00:13:21.515 "percent": 40 00:13:21.515 } 00:13:21.515 }, 00:13:21.515 "base_bdevs_list": [ 00:13:21.515 { 00:13:21.515 "name": "spare", 00:13:21.515 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:21.515 "is_configured": true, 00:13:21.515 "data_offset": 0, 00:13:21.515 "data_size": 65536 00:13:21.515 }, 00:13:21.515 { 00:13:21.515 "name": null, 00:13:21.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.515 "is_configured": false, 00:13:21.515 "data_offset": 0, 00:13:21.515 "data_size": 65536 00:13:21.515 }, 
00:13:21.515 { 00:13:21.515 "name": "BaseBdev3", 00:13:21.515 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:21.515 "is_configured": true, 00:13:21.515 "data_offset": 0, 00:13:21.515 "data_size": 65536 00:13:21.515 }, 00:13:21.515 { 00:13:21.515 "name": "BaseBdev4", 00:13:21.515 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:21.515 "is_configured": true, 00:13:21.515 "data_offset": 0, 00:13:21.515 "data_size": 65536 00:13:21.515 } 00:13:21.515 ] 00:13:21.515 }' 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.515 15:22:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.536 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.537 "name": "raid_bdev1", 00:13:22.537 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:22.537 "strip_size_kb": 0, 00:13:22.537 "state": "online", 00:13:22.537 "raid_level": "raid1", 00:13:22.537 "superblock": false, 00:13:22.537 "num_base_bdevs": 4, 00:13:22.537 "num_base_bdevs_discovered": 3, 00:13:22.537 "num_base_bdevs_operational": 3, 00:13:22.537 "process": { 00:13:22.537 "type": "rebuild", 00:13:22.537 "target": "spare", 00:13:22.537 "progress": { 00:13:22.537 "blocks": 49152, 00:13:22.537 "percent": 75 00:13:22.537 } 00:13:22.537 }, 00:13:22.537 "base_bdevs_list": [ 00:13:22.537 { 00:13:22.537 "name": "spare", 00:13:22.537 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:22.537 "is_configured": true, 00:13:22.537 "data_offset": 0, 00:13:22.537 "data_size": 65536 00:13:22.537 }, 00:13:22.537 { 00:13:22.537 "name": null, 00:13:22.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.537 "is_configured": false, 00:13:22.537 "data_offset": 0, 00:13:22.537 "data_size": 65536 00:13:22.537 }, 00:13:22.537 { 00:13:22.537 "name": "BaseBdev3", 00:13:22.537 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:22.537 "is_configured": true, 00:13:22.537 "data_offset": 0, 00:13:22.537 "data_size": 65536 00:13:22.537 }, 00:13:22.537 { 00:13:22.537 "name": "BaseBdev4", 00:13:22.537 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:22.537 "is_configured": true, 00:13:22.537 "data_offset": 0, 00:13:22.537 "data_size": 65536 00:13:22.537 } 00:13:22.537 ] 00:13:22.537 }' 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.537 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.796 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.796 15:22:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.365 [2024-11-10 15:22:29.507389] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:23.365 [2024-11-10 15:22:29.507523] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:23.365 [2024-11-10 15:22:29.507575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.624 15:22:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.883 15:22:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.883 "name": "raid_bdev1", 00:13:23.883 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:23.883 "strip_size_kb": 0, 00:13:23.883 "state": "online", 00:13:23.883 "raid_level": "raid1", 00:13:23.883 "superblock": false, 00:13:23.883 "num_base_bdevs": 4, 00:13:23.883 "num_base_bdevs_discovered": 3, 00:13:23.883 "num_base_bdevs_operational": 3, 00:13:23.883 "base_bdevs_list": [ 00:13:23.883 { 00:13:23.883 "name": "spare", 00:13:23.883 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": null, 00:13:23.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.883 "is_configured": false, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": "BaseBdev3", 00:13:23.883 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": "BaseBdev4", 00:13:23.883 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 } 00:13:23.883 ] 00:13:23.883 }' 00:13:23.883 15:22:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.883 "name": "raid_bdev1", 00:13:23.883 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:23.883 "strip_size_kb": 0, 00:13:23.883 "state": "online", 00:13:23.883 "raid_level": "raid1", 00:13:23.883 "superblock": false, 00:13:23.883 "num_base_bdevs": 4, 00:13:23.883 "num_base_bdevs_discovered": 3, 00:13:23.883 "num_base_bdevs_operational": 3, 00:13:23.883 "base_bdevs_list": [ 00:13:23.883 { 00:13:23.883 "name": "spare", 00:13:23.883 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": null, 00:13:23.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.883 "is_configured": false, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": "BaseBdev3", 00:13:23.883 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 
00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 }, 00:13:23.883 { 00:13:23.883 "name": "BaseBdev4", 00:13:23.883 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:23.883 "is_configured": true, 00:13:23.883 "data_offset": 0, 00:13:23.883 "data_size": 65536 00:13:23.883 } 00:13:23.883 ] 00:13:23.883 }' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.883 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.884 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.143 
15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.143 "name": "raid_bdev1", 00:13:24.143 "uuid": "1dbf16b5-38ac-41e7-b521-ecadf353fc1e", 00:13:24.143 "strip_size_kb": 0, 00:13:24.143 "state": "online", 00:13:24.143 "raid_level": "raid1", 00:13:24.143 "superblock": false, 00:13:24.143 "num_base_bdevs": 4, 00:13:24.143 "num_base_bdevs_discovered": 3, 00:13:24.143 "num_base_bdevs_operational": 3, 00:13:24.143 "base_bdevs_list": [ 00:13:24.143 { 00:13:24.143 "name": "spare", 00:13:24.143 "uuid": "d8e58281-e48d-57c3-bb4a-7831ccd9d97a", 00:13:24.143 "is_configured": true, 00:13:24.143 "data_offset": 0, 00:13:24.143 "data_size": 65536 00:13:24.143 }, 00:13:24.143 { 00:13:24.143 "name": null, 00:13:24.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.143 "is_configured": false, 00:13:24.143 "data_offset": 0, 00:13:24.143 "data_size": 65536 00:13:24.143 }, 00:13:24.143 { 00:13:24.143 "name": "BaseBdev3", 00:13:24.143 "uuid": "12d8b349-985c-5063-bd08-59867f1f042c", 00:13:24.143 "is_configured": true, 00:13:24.143 "data_offset": 0, 00:13:24.143 "data_size": 65536 00:13:24.143 }, 00:13:24.143 { 00:13:24.143 "name": "BaseBdev4", 00:13:24.143 "uuid": "1330795f-b023-57f0-9bbb-e5eb62a64a36", 00:13:24.143 "is_configured": true, 00:13:24.143 "data_offset": 0, 00:13:24.143 "data_size": 65536 00:13:24.143 } 00:13:24.143 ] 00:13:24.143 }' 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.143 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.403 [2024-11-10 15:22:30.698587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.403 [2024-11-10 15:22:30.698695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.403 [2024-11-10 15:22:30.698780] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.403 [2024-11-10 15:22:30.698874] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.403 [2024-11-10 15:22:30.698885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks 
/var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:24.403 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:24.663 /dev/nbd0 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:24.663 
15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.663 1+0 records in 00:13:24.663 1+0 records out 00:13:24.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224322 s, 18.3 MB/s 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:24.663 15:22:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.663 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:24.663 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:24.663 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.663 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:24.663 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:24.923 /dev/nbd1 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:24.923 15:22:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # break 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.923 1+0 records in 00:13:24.923 1+0 records out 00:13:24.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000480101 s, 8.5 MB/s 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:24.923 15:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:25.183 15:22:31 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.183 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.443 15:22:31 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 89574 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 89574 ']' 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 89574 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 89574 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 89574' 00:13:25.443 killing process with pid 89574 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@971 -- # kill 89574 00:13:25.443 Received shutdown signal, test time was about 60.000000 seconds 00:13:25.443 00:13:25.443 Latency(us) 00:13:25.443 [2024-11-10T15:22:31.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:25.443 [2024-11-10T15:22:31.806Z] =================================================================================================================== 00:13:25.443 [2024-11-10T15:22:31.806Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:25.443 [2024-11-10 
15:22:31.793826] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:25.443 15:22:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@976 -- # wait 89574 00:13:25.703 [2024-11-10 15:22:31.886122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:25.963 00:13:25.963 real 0m15.917s 00:13:25.963 user 0m17.884s 00:13:25.963 sys 0m3.413s 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.963 ************************************ 00:13:25.963 END TEST raid_rebuild_test 00:13:25.963 ************************************ 00:13:25.963 15:22:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:25.963 15:22:32 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:25.963 15:22:32 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:25.963 15:22:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:25.963 ************************************ 00:13:25.963 START TEST raid_rebuild_test_sb 00:13:25.963 ************************************ 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true false true 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:25.963 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 
00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:25.964 15:22:32 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=90002 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 90002 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 90002 ']' 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:25.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:25.964 15:22:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.224 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:26.224 Zero copy mechanism will not be used. 
00:13:26.224 [2024-11-10 15:22:32.389349] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:13:26.224 [2024-11-10 15:22:32.389442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90002 ] 00:13:26.224 [2024-11-10 15:22:32.521430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:26.224 [2024-11-10 15:22:32.558554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.224 [2024-11-10 15:22:32.584390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.484 [2024-11-10 15:22:32.627559] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:26.484 [2024-11-10 15:22:32.627691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.055 BaseBdev1_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.055 [2024-11-10 15:22:33.243454] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:27.055 [2024-11-10 15:22:33.243571] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.055 [2024-11-10 15:22:33.243615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:27.055 [2024-11-10 15:22:33.243660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.055 [2024-11-10 15:22:33.245736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.055 [2024-11-10 15:22:33.245814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:27.055 BaseBdev1 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.055 BaseBdev2_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:27.055 [2024-11-10 15:22:33.272144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:27.055 [2024-11-10 15:22:33.272232] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.055 [2024-11-10 15:22:33.272267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:27.055 [2024-11-10 15:22:33.272296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.055 [2024-11-10 15:22:33.274323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.055 [2024-11-10 15:22:33.274391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:27.055 BaseBdev2 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.055 BaseBdev3_malloc 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.055 [2024-11-10 15:22:33.300876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:27.055 [2024-11-10 15:22:33.300963] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.055 [2024-11-10 15:22:33.300998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:27.055 [2024-11-10 15:22:33.301060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.055 [2024-11-10 15:22:33.303141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.055 [2024-11-10 15:22:33.303176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:27.055 BaseBdev3 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:27.055 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 BaseBdev4_malloc 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 [2024-11-10 15:22:33.339368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:27.056 [2024-11-10 15:22:33.339472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.056 [2024-11-10 15:22:33.339497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 
00:13:27.056 [2024-11-10 15:22:33.339510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.056 [2024-11-10 15:22:33.341711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.056 [2024-11-10 15:22:33.341750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:27.056 BaseBdev4 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 spare_malloc 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 spare_delay 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 [2024-11-10 15:22:33.379980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:27.056 [2024-11-10 15:22:33.380048] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:27.056 [2024-11-10 15:22:33.380067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:27.056 [2024-11-10 15:22:33.380078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:27.056 [2024-11-10 15:22:33.382123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:27.056 [2024-11-10 15:22:33.382203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:27.056 spare 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.056 [2024-11-10 15:22:33.392091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:27.056 [2024-11-10 15:22:33.393878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.056 [2024-11-10 15:22:33.393939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.056 [2024-11-10 15:22:33.393982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.056 [2024-11-10 15:22:33.394145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:27.056 [2024-11-10 15:22:33.394162] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:27.056 [2024-11-10 15:22:33.394384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:27.056 [2024-11-10 15:22:33.394530] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:27.056 [2024-11-10 15:22:33.394540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:27.056 [2024-11-10 15:22:33.394660] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.056 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:13:27.316 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.316 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.316 "name": "raid_bdev1", 00:13:27.316 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:27.316 "strip_size_kb": 0, 00:13:27.316 "state": "online", 00:13:27.316 "raid_level": "raid1", 00:13:27.316 "superblock": true, 00:13:27.316 "num_base_bdevs": 4, 00:13:27.316 "num_base_bdevs_discovered": 4, 00:13:27.316 "num_base_bdevs_operational": 4, 00:13:27.316 "base_bdevs_list": [ 00:13:27.316 { 00:13:27.316 "name": "BaseBdev1", 00:13:27.316 "uuid": "6ff153ae-400a-5ae9-a0b3-3d839c04dc56", 00:13:27.316 "is_configured": true, 00:13:27.316 "data_offset": 2048, 00:13:27.316 "data_size": 63488 00:13:27.316 }, 00:13:27.316 { 00:13:27.316 "name": "BaseBdev2", 00:13:27.316 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:27.316 "is_configured": true, 00:13:27.316 "data_offset": 2048, 00:13:27.316 "data_size": 63488 00:13:27.316 }, 00:13:27.316 { 00:13:27.316 "name": "BaseBdev3", 00:13:27.316 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:27.316 "is_configured": true, 00:13:27.316 "data_offset": 2048, 00:13:27.316 "data_size": 63488 00:13:27.316 }, 00:13:27.316 { 00:13:27.316 "name": "BaseBdev4", 00:13:27.316 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:27.316 "is_configured": true, 00:13:27.316 "data_offset": 2048, 00:13:27.316 "data_size": 63488 00:13:27.316 } 00:13:27.316 ] 00:13:27.316 }' 00:13:27.316 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.316 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:27.577 15:22:33 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 [2024-11-10 15:22:33.804423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.577 15:22:33 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:27.837 [2024-11-10 15:22:34.080330] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:13:27.837 /dev/nbd0 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:13:27.837 1+0 records in 00:13:27.837 1+0 records out 00:13:27.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247868 s, 16.5 MB/s 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:27.837 15:22:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:33.116 63488+0 records in 00:13:33.116 63488+0 records out 00:13:33.116 32505856 bytes (33 MB, 31 MiB) copied, 5.24579 s, 6.2 MB/s 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:33.116 15:22:39 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.116 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:33.376 [2024-11-10 15:22:39.594044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:33.376 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.377 [2024-11-10 15:22:39.631028] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.377 15:22:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.377 "name": "raid_bdev1", 00:13:33.377 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:33.377 "strip_size_kb": 0, 00:13:33.377 "state": "online", 00:13:33.377 "raid_level": "raid1", 00:13:33.377 "superblock": true, 00:13:33.377 "num_base_bdevs": 4, 00:13:33.377 "num_base_bdevs_discovered": 3, 00:13:33.377 "num_base_bdevs_operational": 3, 00:13:33.377 "base_bdevs_list": [ 00:13:33.377 { 00:13:33.377 "name": null, 00:13:33.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.377 
"is_configured": false, 00:13:33.377 "data_offset": 0, 00:13:33.377 "data_size": 63488 00:13:33.377 }, 00:13:33.377 { 00:13:33.377 "name": "BaseBdev2", 00:13:33.377 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:33.377 "is_configured": true, 00:13:33.377 "data_offset": 2048, 00:13:33.377 "data_size": 63488 00:13:33.377 }, 00:13:33.377 { 00:13:33.377 "name": "BaseBdev3", 00:13:33.377 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:33.377 "is_configured": true, 00:13:33.377 "data_offset": 2048, 00:13:33.377 "data_size": 63488 00:13:33.377 }, 00:13:33.377 { 00:13:33.377 "name": "BaseBdev4", 00:13:33.377 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:33.377 "is_configured": true, 00:13:33.377 "data_offset": 2048, 00:13:33.377 "data_size": 63488 00:13:33.377 } 00:13:33.377 ] 00:13:33.377 }' 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.377 15:22:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.944 15:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:33.944 15:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.944 15:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.944 [2024-11-10 15:22:40.055157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:33.944 [2024-11-10 15:22:40.062677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3910 00:13:33.944 15:22:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.944 15:22:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:33.944 [2024-11-10 15:22:40.064938] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.882 "name": "raid_bdev1", 00:13:34.882 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:34.882 "strip_size_kb": 0, 00:13:34.882 "state": "online", 00:13:34.882 "raid_level": "raid1", 00:13:34.882 "superblock": true, 00:13:34.882 "num_base_bdevs": 4, 00:13:34.882 "num_base_bdevs_discovered": 4, 00:13:34.882 "num_base_bdevs_operational": 4, 00:13:34.882 "process": { 00:13:34.882 "type": "rebuild", 00:13:34.882 "target": "spare", 00:13:34.882 "progress": { 00:13:34.882 "blocks": 20480, 00:13:34.882 "percent": 32 00:13:34.882 } 00:13:34.882 }, 00:13:34.882 "base_bdevs_list": [ 00:13:34.882 { 00:13:34.882 "name": "spare", 00:13:34.882 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:34.882 "is_configured": true, 00:13:34.882 "data_offset": 2048, 00:13:34.882 "data_size": 63488 00:13:34.882 }, 00:13:34.882 { 
00:13:34.882 "name": "BaseBdev2", 00:13:34.882 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:34.882 "is_configured": true, 00:13:34.882 "data_offset": 2048, 00:13:34.882 "data_size": 63488 00:13:34.882 }, 00:13:34.882 { 00:13:34.882 "name": "BaseBdev3", 00:13:34.882 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:34.882 "is_configured": true, 00:13:34.882 "data_offset": 2048, 00:13:34.882 "data_size": 63488 00:13:34.882 }, 00:13:34.882 { 00:13:34.882 "name": "BaseBdev4", 00:13:34.882 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:34.882 "is_configured": true, 00:13:34.882 "data_offset": 2048, 00:13:34.882 "data_size": 63488 00:13:34.882 } 00:13:34.882 ] 00:13:34.882 }' 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.882 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.882 [2024-11-10 15:22:41.207351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.142 [2024-11-10 15:22:41.275292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:35.142 [2024-11-10 15:22:41.275457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.143 [2024-11-10 15:22:41.275500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.143 [2024-11-10 15:22:41.275531] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.143 "name": 
"raid_bdev1", 00:13:35.143 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:35.143 "strip_size_kb": 0, 00:13:35.143 "state": "online", 00:13:35.143 "raid_level": "raid1", 00:13:35.143 "superblock": true, 00:13:35.143 "num_base_bdevs": 4, 00:13:35.143 "num_base_bdevs_discovered": 3, 00:13:35.143 "num_base_bdevs_operational": 3, 00:13:35.143 "base_bdevs_list": [ 00:13:35.143 { 00:13:35.143 "name": null, 00:13:35.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.143 "is_configured": false, 00:13:35.143 "data_offset": 0, 00:13:35.143 "data_size": 63488 00:13:35.143 }, 00:13:35.143 { 00:13:35.143 "name": "BaseBdev2", 00:13:35.143 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:35.143 "is_configured": true, 00:13:35.143 "data_offset": 2048, 00:13:35.143 "data_size": 63488 00:13:35.143 }, 00:13:35.143 { 00:13:35.143 "name": "BaseBdev3", 00:13:35.143 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:35.143 "is_configured": true, 00:13:35.143 "data_offset": 2048, 00:13:35.143 "data_size": 63488 00:13:35.143 }, 00:13:35.143 { 00:13:35.143 "name": "BaseBdev4", 00:13:35.143 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:35.143 "is_configured": true, 00:13:35.143 "data_offset": 2048, 00:13:35.143 "data_size": 63488 00:13:35.143 } 00:13:35.143 ] 00:13:35.143 }' 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.143 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.402 15:22:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.402 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.661 "name": "raid_bdev1", 00:13:35.661 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:35.661 "strip_size_kb": 0, 00:13:35.661 "state": "online", 00:13:35.661 "raid_level": "raid1", 00:13:35.661 "superblock": true, 00:13:35.661 "num_base_bdevs": 4, 00:13:35.661 "num_base_bdevs_discovered": 3, 00:13:35.661 "num_base_bdevs_operational": 3, 00:13:35.661 "base_bdevs_list": [ 00:13:35.661 { 00:13:35.661 "name": null, 00:13:35.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.661 "is_configured": false, 00:13:35.661 "data_offset": 0, 00:13:35.661 "data_size": 63488 00:13:35.661 }, 00:13:35.661 { 00:13:35.661 "name": "BaseBdev2", 00:13:35.661 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:35.661 "is_configured": true, 00:13:35.661 "data_offset": 2048, 00:13:35.661 "data_size": 63488 00:13:35.661 }, 00:13:35.661 { 00:13:35.661 "name": "BaseBdev3", 00:13:35.661 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:35.661 "is_configured": true, 00:13:35.661 "data_offset": 2048, 00:13:35.661 "data_size": 63488 00:13:35.661 }, 00:13:35.661 { 00:13:35.661 "name": "BaseBdev4", 00:13:35.661 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:35.661 "is_configured": true, 00:13:35.661 "data_offset": 2048, 00:13:35.661 
"data_size": 63488 00:13:35.661 } 00:13:35.661 ] 00:13:35.661 }' 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.661 [2024-11-10 15:22:41.867494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.661 [2024-11-10 15:22:41.873732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca39e0 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.661 15:22:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:35.661 [2024-11-10 15:22:41.876003] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.600 "name": "raid_bdev1", 00:13:36.600 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:36.600 "strip_size_kb": 0, 00:13:36.600 "state": "online", 00:13:36.600 "raid_level": "raid1", 00:13:36.600 "superblock": true, 00:13:36.600 "num_base_bdevs": 4, 00:13:36.600 "num_base_bdevs_discovered": 4, 00:13:36.600 "num_base_bdevs_operational": 4, 00:13:36.600 "process": { 00:13:36.600 "type": "rebuild", 00:13:36.600 "target": "spare", 00:13:36.600 "progress": { 00:13:36.600 "blocks": 20480, 00:13:36.600 "percent": 32 00:13:36.600 } 00:13:36.600 }, 00:13:36.600 "base_bdevs_list": [ 00:13:36.600 { 00:13:36.600 "name": "spare", 00:13:36.600 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:36.600 "is_configured": true, 00:13:36.600 "data_offset": 2048, 00:13:36.600 "data_size": 63488 00:13:36.600 }, 00:13:36.600 { 00:13:36.600 "name": "BaseBdev2", 00:13:36.600 "uuid": "2a59d7d6-e0ec-5773-985c-ea32ad9a259f", 00:13:36.600 "is_configured": true, 00:13:36.600 "data_offset": 2048, 00:13:36.600 "data_size": 63488 00:13:36.600 }, 00:13:36.600 { 00:13:36.600 "name": "BaseBdev3", 00:13:36.600 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:36.600 "is_configured": true, 00:13:36.600 "data_offset": 2048, 00:13:36.600 "data_size": 63488 00:13:36.600 }, 00:13:36.600 { 00:13:36.600 "name": "BaseBdev4", 00:13:36.600 "uuid": 
"886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:36.600 "is_configured": true, 00:13:36.600 "data_offset": 2048, 00:13:36.600 "data_size": 63488 00:13:36.600 } 00:13:36.600 ] 00:13:36.600 }' 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.600 15:22:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:36.860 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.860 [2024-11-10 15:22:43.018262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:36.860 [2024-11-10 15:22:43.185954] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca39e0 00:13:36.860 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.861 15:22:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.861 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.121 "name": "raid_bdev1", 00:13:37.121 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:37.121 "strip_size_kb": 0, 00:13:37.121 "state": "online", 00:13:37.121 "raid_level": "raid1", 00:13:37.121 "superblock": true, 00:13:37.121 "num_base_bdevs": 4, 00:13:37.121 "num_base_bdevs_discovered": 3, 00:13:37.121 "num_base_bdevs_operational": 3, 00:13:37.121 "process": { 00:13:37.121 "type": "rebuild", 00:13:37.121 "target": "spare", 00:13:37.121 "progress": { 00:13:37.121 "blocks": 24576, 00:13:37.121 "percent": 38 00:13:37.121 } 00:13:37.121 }, 00:13:37.121 "base_bdevs_list": 
[ 00:13:37.121 { 00:13:37.121 "name": "spare", 00:13:37.121 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": null, 00:13:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.121 "is_configured": false, 00:13:37.121 "data_offset": 0, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "BaseBdev3", 00:13:37.121 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 }, 00:13:37.121 { 00:13:37.121 "name": "BaseBdev4", 00:13:37.121 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:37.121 "is_configured": true, 00:13:37.121 "data_offset": 2048, 00:13:37.121 "data_size": 63488 00:13:37.121 } 00:13:37.121 ] 00:13:37.121 }' 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=377 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.121 15:22:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.121 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.121 "name": "raid_bdev1", 00:13:37.121 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:37.121 "strip_size_kb": 0, 00:13:37.121 "state": "online", 00:13:37.121 "raid_level": "raid1", 00:13:37.121 "superblock": true, 00:13:37.121 "num_base_bdevs": 4, 00:13:37.121 "num_base_bdevs_discovered": 3, 00:13:37.121 "num_base_bdevs_operational": 3, 00:13:37.121 "process": { 00:13:37.121 "type": "rebuild", 00:13:37.121 "target": "spare", 00:13:37.121 "progress": { 00:13:37.121 "blocks": 26624, 00:13:37.122 "percent": 41 00:13:37.122 } 00:13:37.122 }, 00:13:37.122 "base_bdevs_list": [ 00:13:37.122 { 00:13:37.122 "name": "spare", 00:13:37.122 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:37.122 "is_configured": true, 00:13:37.122 "data_offset": 2048, 00:13:37.122 "data_size": 63488 00:13:37.122 }, 00:13:37.122 { 00:13:37.122 "name": null, 00:13:37.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.122 "is_configured": false, 00:13:37.122 "data_offset": 0, 00:13:37.122 "data_size": 63488 00:13:37.122 }, 00:13:37.122 { 00:13:37.122 "name": "BaseBdev3", 00:13:37.122 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:37.122 
"is_configured": true, 00:13:37.122 "data_offset": 2048, 00:13:37.122 "data_size": 63488 00:13:37.122 }, 00:13:37.122 { 00:13:37.122 "name": "BaseBdev4", 00:13:37.122 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:37.122 "is_configured": true, 00:13:37.122 "data_offset": 2048, 00:13:37.122 "data_size": 63488 00:13:37.122 } 00:13:37.122 ] 00:13:37.122 }' 00:13:37.122 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.122 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.122 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.122 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.122 15:22:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.503 "name": "raid_bdev1", 00:13:38.503 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:38.503 "strip_size_kb": 0, 00:13:38.503 "state": "online", 00:13:38.503 "raid_level": "raid1", 00:13:38.503 "superblock": true, 00:13:38.503 "num_base_bdevs": 4, 00:13:38.503 "num_base_bdevs_discovered": 3, 00:13:38.503 "num_base_bdevs_operational": 3, 00:13:38.503 "process": { 00:13:38.503 "type": "rebuild", 00:13:38.503 "target": "spare", 00:13:38.503 "progress": { 00:13:38.503 "blocks": 49152, 00:13:38.503 "percent": 77 00:13:38.503 } 00:13:38.503 }, 00:13:38.503 "base_bdevs_list": [ 00:13:38.503 { 00:13:38.503 "name": "spare", 00:13:38.503 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:38.503 "is_configured": true, 00:13:38.503 "data_offset": 2048, 00:13:38.503 "data_size": 63488 00:13:38.503 }, 00:13:38.503 { 00:13:38.503 "name": null, 00:13:38.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.503 "is_configured": false, 00:13:38.503 "data_offset": 0, 00:13:38.503 "data_size": 63488 00:13:38.503 }, 00:13:38.503 { 00:13:38.503 "name": "BaseBdev3", 00:13:38.503 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:38.503 "is_configured": true, 00:13:38.503 "data_offset": 2048, 00:13:38.503 "data_size": 63488 00:13:38.503 }, 00:13:38.503 { 00:13:38.503 "name": "BaseBdev4", 00:13:38.503 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:38.503 "is_configured": true, 00:13:38.503 "data_offset": 2048, 00:13:38.503 "data_size": 63488 00:13:38.503 } 00:13:38.503 ] 00:13:38.503 }' 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.503 15:22:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:38.763 [2024-11-10 15:22:45.101448] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:38.763 [2024-11-10 15:22:45.101535] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:38.763 [2024-11-10 15:22:45.101649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- 
# raid_bdev_info='{ 00:13:39.333 "name": "raid_bdev1", 00:13:39.333 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:39.333 "strip_size_kb": 0, 00:13:39.333 "state": "online", 00:13:39.333 "raid_level": "raid1", 00:13:39.333 "superblock": true, 00:13:39.333 "num_base_bdevs": 4, 00:13:39.333 "num_base_bdevs_discovered": 3, 00:13:39.333 "num_base_bdevs_operational": 3, 00:13:39.333 "base_bdevs_list": [ 00:13:39.333 { 00:13:39.333 "name": "spare", 00:13:39.333 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:39.333 "is_configured": true, 00:13:39.333 "data_offset": 2048, 00:13:39.333 "data_size": 63488 00:13:39.333 }, 00:13:39.333 { 00:13:39.333 "name": null, 00:13:39.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.333 "is_configured": false, 00:13:39.333 "data_offset": 0, 00:13:39.333 "data_size": 63488 00:13:39.333 }, 00:13:39.333 { 00:13:39.333 "name": "BaseBdev3", 00:13:39.333 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:39.333 "is_configured": true, 00:13:39.333 "data_offset": 2048, 00:13:39.333 "data_size": 63488 00:13:39.333 }, 00:13:39.333 { 00:13:39.333 "name": "BaseBdev4", 00:13:39.333 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:39.333 "is_configured": true, 00:13:39.333 "data_offset": 2048, 00:13:39.333 "data_size": 63488 00:13:39.333 } 00:13:39.333 ] 00:13:39.333 }' 00:13:39.333 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 
none none 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.593 "name": "raid_bdev1", 00:13:39.593 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:39.593 "strip_size_kb": 0, 00:13:39.593 "state": "online", 00:13:39.593 "raid_level": "raid1", 00:13:39.593 "superblock": true, 00:13:39.593 "num_base_bdevs": 4, 00:13:39.593 "num_base_bdevs_discovered": 3, 00:13:39.593 "num_base_bdevs_operational": 3, 00:13:39.593 "base_bdevs_list": [ 00:13:39.593 { 00:13:39.593 "name": "spare", 00:13:39.593 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": null, 00:13:39.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.593 "is_configured": false, 00:13:39.593 "data_offset": 0, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": "BaseBdev3", 00:13:39.593 "uuid": 
"ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": "BaseBdev4", 00:13:39.593 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 } 00:13:39.593 ] 00:13:39.593 }' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.593 15:22:45 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.593 "name": "raid_bdev1", 00:13:39.593 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:39.593 "strip_size_kb": 0, 00:13:39.593 "state": "online", 00:13:39.593 "raid_level": "raid1", 00:13:39.593 "superblock": true, 00:13:39.593 "num_base_bdevs": 4, 00:13:39.593 "num_base_bdevs_discovered": 3, 00:13:39.593 "num_base_bdevs_operational": 3, 00:13:39.593 "base_bdevs_list": [ 00:13:39.593 { 00:13:39.593 "name": "spare", 00:13:39.593 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": null, 00:13:39.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.593 "is_configured": false, 00:13:39.593 "data_offset": 0, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": "BaseBdev3", 00:13:39.593 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 }, 00:13:39.593 { 00:13:39.593 "name": "BaseBdev4", 00:13:39.593 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:39.593 "is_configured": true, 00:13:39.593 "data_offset": 2048, 00:13:39.593 "data_size": 63488 00:13:39.593 } 00:13:39.593 ] 00:13:39.593 }' 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.593 15:22:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.853 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:39.853 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.853 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.113 [2024-11-10 15:22:46.220187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:40.113 [2024-11-10 15:22:46.220275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:40.113 [2024-11-10 15:22:46.220410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:40.113 [2024-11-10 15:22:46.220517] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:40.113 [2024-11-10 15:22:46.220574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:40.113 15:22:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.113 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:40.374 /dev/nbd0 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w 
nbd0 /proc/partitions 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.374 1+0 records in 00:13:40.374 1+0 records out 00:13:40.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238811 s, 17.2 MB/s 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.374 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:40.374 /dev/nbd1 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:40.634 15:22:46 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:40.634 1+0 records in 00:13:40.634 1+0 records out 00:13:40.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415387 s, 9.9 MB/s 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.634 15:22:46 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:40.894 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:41.154 15:22:47 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 [2024-11-10 15:22:47.344401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.154 [2024-11-10 15:22:47.344465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.154 [2024-11-10 15:22:47.344494] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:41.154 [2024-11-10 15:22:47.344504] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.154 [2024-11-10 15:22:47.346911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.154 [2024-11-10 15:22:47.346946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.154 [2024-11-10 15:22:47.347033] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:41.154 [2024-11-10 15:22:47.347075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:41.154 [2024-11-10 15:22:47.347214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.154 [2024-11-10 15:22:47.347317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.154 spare 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.154 [2024-11-10 15:22:47.447409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.154 [2024-11-10 15:22:47.447442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:41.154 [2024-11-10 15:22:47.447740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2090 00:13:41.154 [2024-11-10 15:22:47.447928] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.154 [2024-11-10 15:22:47.447957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.154 [2024-11-10 15:22:47.448094] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.154 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.155 "name": "raid_bdev1", 00:13:41.155 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:41.155 
"strip_size_kb": 0, 00:13:41.155 "state": "online", 00:13:41.155 "raid_level": "raid1", 00:13:41.155 "superblock": true, 00:13:41.155 "num_base_bdevs": 4, 00:13:41.155 "num_base_bdevs_discovered": 3, 00:13:41.155 "num_base_bdevs_operational": 3, 00:13:41.155 "base_bdevs_list": [ 00:13:41.155 { 00:13:41.155 "name": "spare", 00:13:41.155 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:41.155 "is_configured": true, 00:13:41.155 "data_offset": 2048, 00:13:41.155 "data_size": 63488 00:13:41.155 }, 00:13:41.155 { 00:13:41.155 "name": null, 00:13:41.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.155 "is_configured": false, 00:13:41.155 "data_offset": 2048, 00:13:41.155 "data_size": 63488 00:13:41.155 }, 00:13:41.155 { 00:13:41.155 "name": "BaseBdev3", 00:13:41.155 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:41.155 "is_configured": true, 00:13:41.155 "data_offset": 2048, 00:13:41.155 "data_size": 63488 00:13:41.155 }, 00:13:41.155 { 00:13:41.155 "name": "BaseBdev4", 00:13:41.155 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:41.155 "is_configured": true, 00:13:41.155 "data_offset": 2048, 00:13:41.155 "data_size": 63488 00:13:41.155 } 00:13:41.155 ] 00:13:41.155 }' 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.155 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.724 15:22:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.724 "name": "raid_bdev1", 00:13:41.724 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:41.724 "strip_size_kb": 0, 00:13:41.724 "state": "online", 00:13:41.724 "raid_level": "raid1", 00:13:41.724 "superblock": true, 00:13:41.724 "num_base_bdevs": 4, 00:13:41.724 "num_base_bdevs_discovered": 3, 00:13:41.724 "num_base_bdevs_operational": 3, 00:13:41.724 "base_bdevs_list": [ 00:13:41.724 { 00:13:41.724 "name": "spare", 00:13:41.724 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:41.724 "is_configured": true, 00:13:41.724 "data_offset": 2048, 00:13:41.724 "data_size": 63488 00:13:41.724 }, 00:13:41.724 { 00:13:41.724 "name": null, 00:13:41.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.724 "is_configured": false, 00:13:41.724 "data_offset": 2048, 00:13:41.724 "data_size": 63488 00:13:41.724 }, 00:13:41.724 { 00:13:41.724 "name": "BaseBdev3", 00:13:41.724 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:41.724 "is_configured": true, 00:13:41.724 "data_offset": 2048, 00:13:41.724 "data_size": 63488 00:13:41.724 }, 00:13:41.724 { 00:13:41.724 "name": "BaseBdev4", 00:13:41.724 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:41.724 "is_configured": true, 00:13:41.724 "data_offset": 2048, 00:13:41.724 "data_size": 63488 00:13:41.724 } 00:13:41.724 ] 00:13:41.724 }' 00:13:41.724 15:22:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.724 15:22:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.724 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.725 [2024-11-10 15:22:48.060652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.725 15:22:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.725 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.984 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.984 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.984 "name": "raid_bdev1", 00:13:41.984 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:41.984 "strip_size_kb": 0, 00:13:41.984 "state": "online", 00:13:41.984 "raid_level": "raid1", 00:13:41.984 "superblock": true, 00:13:41.984 "num_base_bdevs": 4, 00:13:41.984 "num_base_bdevs_discovered": 2, 00:13:41.984 "num_base_bdevs_operational": 2, 00:13:41.984 "base_bdevs_list": [ 00:13:41.984 { 00:13:41.984 "name": null, 00:13:41.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.984 "is_configured": false, 00:13:41.984 "data_offset": 0, 00:13:41.984 "data_size": 63488 00:13:41.984 }, 00:13:41.984 { 
00:13:41.984 "name": null, 00:13:41.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.984 "is_configured": false, 00:13:41.984 "data_offset": 2048, 00:13:41.984 "data_size": 63488 00:13:41.984 }, 00:13:41.984 { 00:13:41.984 "name": "BaseBdev3", 00:13:41.984 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:41.984 "is_configured": true, 00:13:41.984 "data_offset": 2048, 00:13:41.984 "data_size": 63488 00:13:41.984 }, 00:13:41.984 { 00:13:41.984 "name": "BaseBdev4", 00:13:41.984 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:41.984 "is_configured": true, 00:13:41.984 "data_offset": 2048, 00:13:41.984 "data_size": 63488 00:13:41.984 } 00:13:41.984 ] 00:13:41.984 }' 00:13:41.984 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.984 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.243 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.244 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.244 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.244 [2024-11-10 15:22:48.520782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.244 [2024-11-10 15:22:48.520926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:42.244 [2024-11-10 15:22:48.520939] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:42.244 [2024-11-10 15:22:48.520972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.244 [2024-11-10 15:22:48.527970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2160 00:13:42.244 15:22:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.244 15:22:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:42.244 [2024-11-10 15:22:48.530172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.181 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.442 "name": "raid_bdev1", 00:13:43.442 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:43.442 "strip_size_kb": 0, 00:13:43.442 "state": "online", 00:13:43.442 "raid_level": "raid1", 
00:13:43.442 "superblock": true, 00:13:43.442 "num_base_bdevs": 4, 00:13:43.442 "num_base_bdevs_discovered": 3, 00:13:43.442 "num_base_bdevs_operational": 3, 00:13:43.442 "process": { 00:13:43.442 "type": "rebuild", 00:13:43.442 "target": "spare", 00:13:43.442 "progress": { 00:13:43.442 "blocks": 20480, 00:13:43.442 "percent": 32 00:13:43.442 } 00:13:43.442 }, 00:13:43.442 "base_bdevs_list": [ 00:13:43.442 { 00:13:43.442 "name": "spare", 00:13:43.442 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:43.442 "is_configured": true, 00:13:43.442 "data_offset": 2048, 00:13:43.442 "data_size": 63488 00:13:43.442 }, 00:13:43.442 { 00:13:43.442 "name": null, 00:13:43.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.442 "is_configured": false, 00:13:43.442 "data_offset": 2048, 00:13:43.442 "data_size": 63488 00:13:43.442 }, 00:13:43.442 { 00:13:43.442 "name": "BaseBdev3", 00:13:43.442 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:43.442 "is_configured": true, 00:13:43.442 "data_offset": 2048, 00:13:43.442 "data_size": 63488 00:13:43.442 }, 00:13:43.442 { 00:13:43.442 "name": "BaseBdev4", 00:13:43.442 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:43.442 "is_configured": true, 00:13:43.442 "data_offset": 2048, 00:13:43.442 "data_size": 63488 00:13:43.442 } 00:13:43.442 ] 00:13:43.442 }' 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.442 [2024-11-10 15:22:49.686215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.442 [2024-11-10 15:22:49.739593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.442 [2024-11-10 15:22:49.739650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.442 [2024-11-10 15:22:49.739670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.442 [2024-11-10 15:22:49.739678] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.442 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.443 "name": "raid_bdev1", 00:13:43.443 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:43.443 "strip_size_kb": 0, 00:13:43.443 "state": "online", 00:13:43.443 "raid_level": "raid1", 00:13:43.443 "superblock": true, 00:13:43.443 "num_base_bdevs": 4, 00:13:43.443 "num_base_bdevs_discovered": 2, 00:13:43.443 "num_base_bdevs_operational": 2, 00:13:43.443 "base_bdevs_list": [ 00:13:43.443 { 00:13:43.443 "name": null, 00:13:43.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.443 "is_configured": false, 00:13:43.443 "data_offset": 0, 00:13:43.443 "data_size": 63488 00:13:43.443 }, 00:13:43.443 { 00:13:43.443 "name": null, 00:13:43.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.443 "is_configured": false, 00:13:43.443 "data_offset": 2048, 00:13:43.443 "data_size": 63488 00:13:43.443 }, 00:13:43.443 { 00:13:43.443 "name": "BaseBdev3", 00:13:43.443 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:43.443 "is_configured": true, 00:13:43.443 "data_offset": 2048, 00:13:43.443 "data_size": 63488 00:13:43.443 }, 00:13:43.443 { 00:13:43.443 "name": "BaseBdev4", 00:13:43.443 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:43.443 "is_configured": true, 00:13:43.443 "data_offset": 2048, 00:13:43.443 "data_size": 63488 00:13:43.443 } 00:13:43.443 ] 00:13:43.443 }' 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:43.443 15:22:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 15:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:44.012 15:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.012 15:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.012 [2024-11-10 15:22:50.202106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:44.012 [2024-11-10 15:22:50.202169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.012 [2024-11-10 15:22:50.202200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:44.012 [2024-11-10 15:22:50.202210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.012 [2024-11-10 15:22:50.202699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.012 [2024-11-10 15:22:50.202726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:44.012 [2024-11-10 15:22:50.202822] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:44.012 [2024-11-10 15:22:50.202841] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:44.012 [2024-11-10 15:22:50.202859] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:44.012 [2024-11-10 15:22:50.202891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.012 [2024-11-10 15:22:50.209448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc2230 00:13:44.012 spare 00:13:44.012 15:22:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.012 15:22:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:44.012 [2024-11-10 15:22:50.211676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.951 "name": "raid_bdev1", 00:13:44.951 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:44.951 "strip_size_kb": 0, 00:13:44.951 "state": "online", 00:13:44.951 
"raid_level": "raid1", 00:13:44.951 "superblock": true, 00:13:44.951 "num_base_bdevs": 4, 00:13:44.951 "num_base_bdevs_discovered": 3, 00:13:44.951 "num_base_bdevs_operational": 3, 00:13:44.951 "process": { 00:13:44.951 "type": "rebuild", 00:13:44.951 "target": "spare", 00:13:44.951 "progress": { 00:13:44.951 "blocks": 20480, 00:13:44.951 "percent": 32 00:13:44.951 } 00:13:44.951 }, 00:13:44.951 "base_bdevs_list": [ 00:13:44.951 { 00:13:44.951 "name": "spare", 00:13:44.951 "uuid": "9960f72c-a9cd-5a36-aa25-a337574c3d33", 00:13:44.951 "is_configured": true, 00:13:44.951 "data_offset": 2048, 00:13:44.951 "data_size": 63488 00:13:44.951 }, 00:13:44.951 { 00:13:44.951 "name": null, 00:13:44.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.951 "is_configured": false, 00:13:44.951 "data_offset": 2048, 00:13:44.951 "data_size": 63488 00:13:44.951 }, 00:13:44.951 { 00:13:44.951 "name": "BaseBdev3", 00:13:44.951 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:44.951 "is_configured": true, 00:13:44.951 "data_offset": 2048, 00:13:44.951 "data_size": 63488 00:13:44.951 }, 00:13:44.951 { 00:13:44.951 "name": "BaseBdev4", 00:13:44.951 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:44.951 "is_configured": true, 00:13:44.951 "data_offset": 2048, 00:13:44.951 "data_size": 63488 00:13:44.951 } 00:13:44.951 ] 00:13:44.951 }' 00:13:44.951 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.211 [2024-11-10 15:22:51.367183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.211 [2024-11-10 15:22:51.421377] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:45.211 [2024-11-10 15:22:51.421437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.211 [2024-11-10 15:22:51.421453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:45.211 [2024-11-10 15:22:51.421463] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.211 
15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.211 "name": "raid_bdev1", 00:13:45.211 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:45.211 "strip_size_kb": 0, 00:13:45.211 "state": "online", 00:13:45.211 "raid_level": "raid1", 00:13:45.211 "superblock": true, 00:13:45.211 "num_base_bdevs": 4, 00:13:45.211 "num_base_bdevs_discovered": 2, 00:13:45.211 "num_base_bdevs_operational": 2, 00:13:45.211 "base_bdevs_list": [ 00:13:45.211 { 00:13:45.211 "name": null, 00:13:45.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.211 "is_configured": false, 00:13:45.211 "data_offset": 0, 00:13:45.211 "data_size": 63488 00:13:45.211 }, 00:13:45.211 { 00:13:45.211 "name": null, 00:13:45.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.211 "is_configured": false, 00:13:45.211 "data_offset": 2048, 00:13:45.211 "data_size": 63488 00:13:45.211 }, 00:13:45.211 { 00:13:45.211 "name": "BaseBdev3", 00:13:45.211 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:45.211 "is_configured": true, 00:13:45.211 "data_offset": 2048, 00:13:45.211 "data_size": 63488 00:13:45.211 }, 00:13:45.211 { 00:13:45.211 "name": "BaseBdev4", 00:13:45.211 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:45.211 "is_configured": true, 00:13:45.211 "data_offset": 2048, 00:13:45.211 "data_size": 63488 00:13:45.211 } 00:13:45.211 ] 00:13:45.211 }' 00:13:45.211 15:22:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.211 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.471 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.471 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.471 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:45.471 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:45.471 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.729 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.729 "name": "raid_bdev1", 00:13:45.729 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:45.729 "strip_size_kb": 0, 00:13:45.729 "state": "online", 00:13:45.729 "raid_level": "raid1", 00:13:45.729 "superblock": true, 00:13:45.729 "num_base_bdevs": 4, 00:13:45.729 "num_base_bdevs_discovered": 2, 00:13:45.729 "num_base_bdevs_operational": 2, 00:13:45.729 "base_bdevs_list": [ 00:13:45.729 { 00:13:45.729 "name": null, 00:13:45.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.729 "is_configured": false, 00:13:45.729 "data_offset": 0, 00:13:45.729 "data_size": 63488 00:13:45.729 }, 00:13:45.729 
{ 00:13:45.729 "name": null, 00:13:45.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.729 "is_configured": false, 00:13:45.729 "data_offset": 2048, 00:13:45.729 "data_size": 63488 00:13:45.729 }, 00:13:45.729 { 00:13:45.729 "name": "BaseBdev3", 00:13:45.729 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:45.729 "is_configured": true, 00:13:45.729 "data_offset": 2048, 00:13:45.729 "data_size": 63488 00:13:45.729 }, 00:13:45.729 { 00:13:45.729 "name": "BaseBdev4", 00:13:45.729 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:45.729 "is_configured": true, 00:13:45.729 "data_offset": 2048, 00:13:45.730 "data_size": 63488 00:13:45.730 } 00:13:45.730 ] 00:13:45.730 }' 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:45.730 [2024-11-10 15:22:51.968163] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:45.730 [2024-11-10 15:22:51.968222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.730 [2024-11-10 15:22:51.968244] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:45.730 [2024-11-10 15:22:51.968257] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.730 [2024-11-10 15:22:51.968761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.730 [2024-11-10 15:22:51.968796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.730 [2024-11-10 15:22:51.968867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:45.730 [2024-11-10 15:22:51.968887] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:45.730 [2024-11-10 15:22:51.968897] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:45.730 [2024-11-10 15:22:51.968911] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:45.730 BaseBdev1 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.730 15:22:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.668 15:22:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 15:22:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.668 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.668 "name": "raid_bdev1", 00:13:46.668 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:46.668 "strip_size_kb": 0, 00:13:46.668 "state": "online", 00:13:46.668 "raid_level": "raid1", 00:13:46.668 "superblock": true, 00:13:46.668 "num_base_bdevs": 4, 00:13:46.668 "num_base_bdevs_discovered": 2, 00:13:46.668 "num_base_bdevs_operational": 2, 00:13:46.668 "base_bdevs_list": [ 00:13:46.668 { 00:13:46.668 "name": null, 00:13:46.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.668 "is_configured": false, 00:13:46.668 "data_offset": 0, 00:13:46.668 "data_size": 63488 00:13:46.668 }, 00:13:46.668 { 00:13:46.668 "name": null, 00:13:46.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.668 
"is_configured": false, 00:13:46.668 "data_offset": 2048, 00:13:46.668 "data_size": 63488 00:13:46.668 }, 00:13:46.668 { 00:13:46.668 "name": "BaseBdev3", 00:13:46.668 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:46.668 "is_configured": true, 00:13:46.668 "data_offset": 2048, 00:13:46.668 "data_size": 63488 00:13:46.668 }, 00:13:46.668 { 00:13:46.668 "name": "BaseBdev4", 00:13:46.668 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:46.668 "is_configured": true, 00:13:46.668 "data_offset": 2048, 00:13:46.668 "data_size": 63488 00:13:46.668 } 00:13:46.668 ] 00:13:46.668 }' 00:13:46.668 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.668 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.237 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:47.237 "name": "raid_bdev1", 00:13:47.237 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:47.237 "strip_size_kb": 0, 00:13:47.237 "state": "online", 00:13:47.237 "raid_level": "raid1", 00:13:47.237 "superblock": true, 00:13:47.237 "num_base_bdevs": 4, 00:13:47.237 "num_base_bdevs_discovered": 2, 00:13:47.237 "num_base_bdevs_operational": 2, 00:13:47.237 "base_bdevs_list": [ 00:13:47.237 { 00:13:47.237 "name": null, 00:13:47.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.237 "is_configured": false, 00:13:47.238 "data_offset": 0, 00:13:47.238 "data_size": 63488 00:13:47.238 }, 00:13:47.238 { 00:13:47.238 "name": null, 00:13:47.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.238 "is_configured": false, 00:13:47.238 "data_offset": 2048, 00:13:47.238 "data_size": 63488 00:13:47.238 }, 00:13:47.238 { 00:13:47.238 "name": "BaseBdev3", 00:13:47.238 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:47.238 "is_configured": true, 00:13:47.238 "data_offset": 2048, 00:13:47.238 "data_size": 63488 00:13:47.238 }, 00:13:47.238 { 00:13:47.238 "name": "BaseBdev4", 00:13:47.238 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:47.238 "is_configured": true, 00:13:47.238 "data_offset": 2048, 00:13:47.238 "data_size": 63488 00:13:47.238 } 00:13:47.238 ] 00:13:47.238 }' 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.238 [2024-11-10 15:22:53.484641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.238 [2024-11-10 15:22:53.484810] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:47.238 [2024-11-10 15:22:53.484827] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:47.238 request: 00:13:47.238 { 00:13:47.238 "base_bdev": "BaseBdev1", 00:13:47.238 "raid_bdev": "raid_bdev1", 00:13:47.238 "method": "bdev_raid_add_base_bdev", 00:13:47.238 "req_id": 1 00:13:47.238 } 00:13:47.238 Got JSON-RPC error response 00:13:47.238 response: 00:13:47.238 { 00:13:47.238 "code": -22, 00:13:47.238 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:47.238 } 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:47.238 15:22:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:48.177 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.436 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.436 "name": "raid_bdev1", 00:13:48.436 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:48.436 "strip_size_kb": 0, 00:13:48.436 "state": "online", 00:13:48.436 "raid_level": "raid1", 00:13:48.436 "superblock": true, 00:13:48.436 "num_base_bdevs": 4, 00:13:48.436 "num_base_bdevs_discovered": 2, 00:13:48.436 "num_base_bdevs_operational": 2, 00:13:48.436 "base_bdevs_list": [ 00:13:48.436 { 00:13:48.436 "name": null, 00:13:48.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.436 "is_configured": false, 00:13:48.436 "data_offset": 0, 00:13:48.436 "data_size": 63488 00:13:48.436 }, 00:13:48.436 { 00:13:48.436 "name": null, 00:13:48.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.436 "is_configured": false, 00:13:48.436 "data_offset": 2048, 00:13:48.436 "data_size": 63488 00:13:48.436 }, 00:13:48.436 { 00:13:48.436 "name": "BaseBdev3", 00:13:48.436 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:48.436 "is_configured": true, 00:13:48.436 "data_offset": 2048, 00:13:48.436 "data_size": 63488 00:13:48.436 }, 00:13:48.436 { 00:13:48.436 "name": "BaseBdev4", 00:13:48.436 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:48.436 "is_configured": true, 00:13:48.436 "data_offset": 2048, 00:13:48.436 "data_size": 63488 00:13:48.436 } 00:13:48.436 ] 00:13:48.437 }' 00:13:48.437 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.437 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.696 15:22:54 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.696 "name": "raid_bdev1", 00:13:48.696 "uuid": "3474364e-f9e3-46f2-9a61-4acba294b251", 00:13:48.696 "strip_size_kb": 0, 00:13:48.696 "state": "online", 00:13:48.696 "raid_level": "raid1", 00:13:48.696 "superblock": true, 00:13:48.696 "num_base_bdevs": 4, 00:13:48.696 "num_base_bdevs_discovered": 2, 00:13:48.696 "num_base_bdevs_operational": 2, 00:13:48.696 "base_bdevs_list": [ 00:13:48.696 { 00:13:48.696 "name": null, 00:13:48.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.696 "is_configured": false, 00:13:48.696 "data_offset": 0, 00:13:48.696 "data_size": 63488 00:13:48.696 }, 00:13:48.696 { 00:13:48.696 "name": null, 00:13:48.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.696 "is_configured": false, 00:13:48.696 "data_offset": 2048, 00:13:48.696 "data_size": 63488 00:13:48.696 }, 00:13:48.696 { 00:13:48.696 "name": "BaseBdev3", 00:13:48.696 "uuid": "ebde77cd-5cbd-554a-8f62-21b209aac852", 00:13:48.696 "is_configured": true, 00:13:48.696 "data_offset": 2048, 00:13:48.696 "data_size": 63488 00:13:48.696 }, 
00:13:48.696 { 00:13:48.696 "name": "BaseBdev4", 00:13:48.696 "uuid": "886bcd8f-8af8-559b-a48a-9d4d7e4c71af", 00:13:48.696 "is_configured": true, 00:13:48.696 "data_offset": 2048, 00:13:48.696 "data_size": 63488 00:13:48.696 } 00:13:48.696 ] 00:13:48.696 }' 00:13:48.696 15:22:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 90002 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 90002 ']' 00:13:48.696 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 90002 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90002 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:48.957 killing process with pid 90002 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90002' 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 90002 00:13:48.957 Received shutdown signal, test time was about 60.000000 seconds 00:13:48.957 00:13:48.957 Latency(us) 00:13:48.957 
[2024-11-10T15:22:55.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.957 [2024-11-10T15:22:55.320Z] =================================================================================================================== 00:13:48.957 [2024-11-10T15:22:55.320Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.957 [2024-11-10 15:22:55.083201] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.957 [2024-11-10 15:22:55.083334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.957 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 90002 00:13:48.957 [2024-11-10 15:22:55.083436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:48.957 [2024-11-10 15:22:55.083446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:48.957 [2024-11-10 15:22:55.174954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:49.217 00:13:49.217 real 0m23.212s 00:13:49.217 user 0m28.141s 00:13:49.217 sys 0m3.827s 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.217 ************************************ 00:13:49.217 END TEST raid_rebuild_test_sb 00:13:49.217 ************************************ 00:13:49.217 15:22:55 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:49.217 15:22:55 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:49.217 15:22:55 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:49.217 15:22:55 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:49.217 ************************************ 00:13:49.217 START TEST raid_rebuild_test_io 00:13:49.217 ************************************ 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 false true true 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.217 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=90739 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 90739 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@833 -- # '[' -z 90739 ']' 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:13:49.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:49.477 15:22:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.477 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:49.477 Zero copy mechanism will not be used. 00:13:49.477 [2024-11-10 15:22:55.669726] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:13:49.478 [2024-11-10 15:22:55.669868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90739 ] 00:13:49.478 [2024-11-10 15:22:55.803474] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:13:49.738 [2024-11-10 15:22:55.840970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:49.738 [2024-11-10 15:22:55.880182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.738 [2024-11-10 15:22:55.955965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.738 [2024-11-10 15:22:55.956023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@866 -- # return 0 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 BaseBdev1_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 [2024-11-10 15:22:56.590687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:50.307 [2024-11-10 15:22:56.590762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.307 [2024-11-10 15:22:56.590791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:50.307 [2024-11-10 
15:22:56.590806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.307 [2024-11-10 15:22:56.593326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.307 [2024-11-10 15:22:56.593361] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:50.307 BaseBdev1 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 BaseBdev2_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 [2024-11-10 15:22:56.625113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:50.307 [2024-11-10 15:22:56.625176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.307 [2024-11-10 15:22:56.625195] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:50.307 [2024-11-10 15:22:56.625207] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.307 [2024-11-10 15:22:56.627574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:50.307 [2024-11-10 15:22:56.627608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:50.307 BaseBdev2 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.307 BaseBdev3_malloc 00:13:50.307 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.308 [2024-11-10 15:22:56.659553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:50.308 [2024-11-10 15:22:56.659600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.308 [2024-11-10 15:22:56.659620] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:50.308 [2024-11-10 15:22:56.659631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.308 [2024-11-10 15:22:56.662006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.308 [2024-11-10 15:22:56.662064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:50.308 BaseBdev3 00:13:50.308 15:22:56 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.308 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 BaseBdev4_malloc 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 [2024-11-10 15:22:56.711508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:50.568 [2024-11-10 15:22:56.711599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.568 [2024-11-10 15:22:56.711634] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:50.568 [2024-11-10 15:22:56.711659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.568 [2024-11-10 15:22:56.715671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.568 [2024-11-10 15:22:56.715718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:50.568 BaseBdev4 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 spare_malloc 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 spare_delay 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 [2024-11-10 15:22:56.758317] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:50.568 [2024-11-10 15:22:56.758370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.568 [2024-11-10 15:22:56.758388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:50.568 [2024-11-10 15:22:56.758399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.568 [2024-11-10 15:22:56.760730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.568 [2024-11-10 15:22:56.760764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.568 spare 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 [2024-11-10 15:22:56.770405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.568 [2024-11-10 15:22:56.772525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.568 [2024-11-10 15:22:56.772594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:50.568 [2024-11-10 15:22:56.772639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.568 [2024-11-10 15:22:56.772709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:13:50.568 [2024-11-10 15:22:56.772732] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:50.568 [2024-11-10 15:22:56.773041] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:50.568 [2024-11-10 15:22:56.773211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:13:50.568 [2024-11-10 15:22:56.773226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:13:50.568 [2024-11-10 15:22:56.773350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:50.568 15:22:56 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.568 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.568 "name": "raid_bdev1", 00:13:50.568 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:50.569 "strip_size_kb": 0, 00:13:50.569 "state": "online", 00:13:50.569 "raid_level": "raid1", 00:13:50.569 "superblock": false, 00:13:50.569 "num_base_bdevs": 4, 00:13:50.569 "num_base_bdevs_discovered": 4, 00:13:50.569 "num_base_bdevs_operational": 4, 00:13:50.569 "base_bdevs_list": [ 00:13:50.569 
{ 00:13:50.569 "name": "BaseBdev1", 00:13:50.569 "uuid": "848ab4a9-5968-53e8-a22d-2cbe3e811f74", 00:13:50.569 "is_configured": true, 00:13:50.569 "data_offset": 0, 00:13:50.569 "data_size": 65536 00:13:50.569 }, 00:13:50.569 { 00:13:50.569 "name": "BaseBdev2", 00:13:50.569 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:50.569 "is_configured": true, 00:13:50.569 "data_offset": 0, 00:13:50.569 "data_size": 65536 00:13:50.569 }, 00:13:50.569 { 00:13:50.569 "name": "BaseBdev3", 00:13:50.569 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:50.569 "is_configured": true, 00:13:50.569 "data_offset": 0, 00:13:50.569 "data_size": 65536 00:13:50.569 }, 00:13:50.569 { 00:13:50.569 "name": "BaseBdev4", 00:13:50.569 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:50.569 "is_configured": true, 00:13:50.569 "data_offset": 0, 00:13:50.569 "data_size": 65536 00:13:50.569 } 00:13:50.569 ] 00:13:50.569 }' 00:13:50.569 15:22:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.569 15:22:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 [2024-11-10 15:22:57.230721] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.139 
15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 [2024-11-10 15:22:57.294477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.139 "name": "raid_bdev1", 00:13:51.139 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:51.139 "strip_size_kb": 0, 00:13:51.139 "state": "online", 00:13:51.139 "raid_level": "raid1", 00:13:51.139 "superblock": false, 00:13:51.139 "num_base_bdevs": 4, 00:13:51.139 "num_base_bdevs_discovered": 3, 00:13:51.139 "num_base_bdevs_operational": 3, 00:13:51.139 "base_bdevs_list": [ 00:13:51.139 { 00:13:51.139 "name": null, 00:13:51.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.139 "is_configured": false, 00:13:51.139 "data_offset": 0, 00:13:51.139 "data_size": 65536 00:13:51.139 }, 00:13:51.139 { 00:13:51.139 "name": "BaseBdev2", 00:13:51.139 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:51.139 "is_configured": true, 00:13:51.139 "data_offset": 0, 00:13:51.139 "data_size": 65536 00:13:51.139 }, 00:13:51.139 { 00:13:51.139 "name": "BaseBdev3", 00:13:51.139 "uuid": 
"ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:51.139 "is_configured": true, 00:13:51.139 "data_offset": 0, 00:13:51.139 "data_size": 65536 00:13:51.139 }, 00:13:51.139 { 00:13:51.139 "name": "BaseBdev4", 00:13:51.139 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:51.139 "is_configured": true, 00:13:51.139 "data_offset": 0, 00:13:51.139 "data_size": 65536 00:13:51.139 } 00:13:51.139 ] 00:13:51.139 }' 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.139 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.139 [2024-11-10 15:22:57.385934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:13:51.139 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.139 Zero copy mechanism will not be used. 00:13:51.139 Running I/O for 60 seconds... 00:13:51.399 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.399 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.399 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.399 [2024-11-10 15:22:57.701658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.399 15:22:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.399 15:22:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:51.399 [2024-11-10 15:22:57.745446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:13:51.399 [2024-11-10 15:22:57.747846] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.660 [2024-11-10 15:22:57.881608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.919 
[2024-11-10 15:22:58.087287] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.919 [2024-11-10 15:22:58.087538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:52.179 [2024-11-10 15:22:58.341397] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:52.179 [2024-11-10 15:22:58.342294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:52.439 198.00 IOPS, 594.00 MiB/s [2024-11-10T15:22:58.802Z] [2024-11-10 15:22:58.548843] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.439 15:22:58 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.439 "name": "raid_bdev1", 00:13:52.439 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:52.439 "strip_size_kb": 0, 00:13:52.439 "state": "online", 00:13:52.439 "raid_level": "raid1", 00:13:52.439 "superblock": false, 00:13:52.439 "num_base_bdevs": 4, 00:13:52.439 "num_base_bdevs_discovered": 4, 00:13:52.439 "num_base_bdevs_operational": 4, 00:13:52.439 "process": { 00:13:52.439 "type": "rebuild", 00:13:52.439 "target": "spare", 00:13:52.439 "progress": { 00:13:52.439 "blocks": 12288, 00:13:52.439 "percent": 18 00:13:52.439 } 00:13:52.439 }, 00:13:52.439 "base_bdevs_list": [ 00:13:52.439 { 00:13:52.439 "name": "spare", 00:13:52.439 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:52.439 "is_configured": true, 00:13:52.439 "data_offset": 0, 00:13:52.439 "data_size": 65536 00:13:52.439 }, 00:13:52.439 { 00:13:52.439 "name": "BaseBdev2", 00:13:52.439 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:52.439 "is_configured": true, 00:13:52.439 "data_offset": 0, 00:13:52.439 "data_size": 65536 00:13:52.439 }, 00:13:52.439 { 00:13:52.439 "name": "BaseBdev3", 00:13:52.439 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:52.439 "is_configured": true, 00:13:52.439 "data_offset": 0, 00:13:52.439 "data_size": 65536 00:13:52.439 }, 00:13:52.439 { 00:13:52.439 "name": "BaseBdev4", 00:13:52.439 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:52.439 "is_configured": true, 00:13:52.439 "data_offset": 0, 00:13:52.439 "data_size": 65536 00:13:52.439 } 00:13:52.439 ] 00:13:52.439 }' 00:13:52.439 [2024-11-10 15:22:58.787209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.439 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
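The `[[ rebuild == \r\e\b\u\i\l\d ]]` lines above are how bash xtrace renders a quoted-pattern comparison: every character of the quoted right-hand side is backslash-escaped in the trace, which forces a literal string match instead of glob matching. A minimal standalone sketch of that check:

```shell
# In bash, [[ $x == pattern ]] does glob matching by default; quoting the
# right-hand side (which xtrace prints as \r\e\b\u\i\l\d) forces a literal
# string comparison, exactly as the verify_raid_bdev_process checks do.
process_type=rebuild
if [[ $process_type == "rebuild" ]]; then
  result=match
else
  result=no-match
fi
echo "$result"
```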
00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.700 [2024-11-10 15:22:58.867658] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.700 [2024-11-10 15:22:58.919543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:52.700 [2024-11-10 15:22:58.922456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.700 [2024-11-10 15:22:58.922500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:52.700 [2024-11-10 15:22:58.922511] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:52.700 [2024-11-10 15:22:58.952401] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
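The verification helpers seen above isolate a single raid bdev from `rpc_cmd bdev_raid_get_bdevs all` output with a `jq` select filter. The same filter can be exercised against a canned payload; the sample JSON below is illustrative (only `raid_bdev1` comes from this run, the second entry is made up), and `jq` must be installed:

```shell
# Select one raid bdev by name from a bdev_raid_get_bdevs-style JSON array.
# The here-doc stands in for live RPC output; field values are illustrative.
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<'EOF'
[
  {"name": "raid_bdev1", "state": "online", "num_base_bdevs_discovered": 3},
  {"name": "other_bdev", "state": "configuring", "num_base_bdevs_discovered": 1}
]
EOF
)
# Pull a single field out of the selected object, as the checks above do.
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"
```

The `.process.type // "none"` filters elsewhere in the log use jq's `//` alternative operator, which substitutes `"none"` when the raid bdev has no active process and the path evaluates to null.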
00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.700 "name": "raid_bdev1", 00:13:52.700 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:52.700 "strip_size_kb": 0, 00:13:52.700 "state": "online", 00:13:52.700 "raid_level": "raid1", 00:13:52.700 "superblock": false, 00:13:52.700 "num_base_bdevs": 4, 00:13:52.700 "num_base_bdevs_discovered": 3, 00:13:52.700 "num_base_bdevs_operational": 3, 00:13:52.700 "base_bdevs_list": [ 00:13:52.700 { 00:13:52.700 "name": null, 00:13:52.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.700 "is_configured": false, 00:13:52.700 "data_offset": 0, 00:13:52.700 "data_size": 65536 00:13:52.700 }, 00:13:52.700 { 00:13:52.700 "name": "BaseBdev2", 00:13:52.700 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:52.700 "is_configured": true, 00:13:52.700 "data_offset": 0, 00:13:52.700 "data_size": 
65536 00:13:52.700 }, 00:13:52.700 { 00:13:52.700 "name": "BaseBdev3", 00:13:52.700 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:52.700 "is_configured": true, 00:13:52.700 "data_offset": 0, 00:13:52.700 "data_size": 65536 00:13:52.700 }, 00:13:52.700 { 00:13:52.700 "name": "BaseBdev4", 00:13:52.700 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:52.700 "is_configured": true, 00:13:52.700 "data_offset": 0, 00:13:52.700 "data_size": 65536 00:13:52.700 } 00:13:52.700 ] 00:13:52.700 }' 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.700 15:22:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.270 178.00 IOPS, 534.00 MiB/s [2024-11-10T15:22:59.633Z] 15:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.270 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.270 "name": "raid_bdev1", 
00:13:53.270 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:53.270 "strip_size_kb": 0, 00:13:53.270 "state": "online", 00:13:53.270 "raid_level": "raid1", 00:13:53.270 "superblock": false, 00:13:53.270 "num_base_bdevs": 4, 00:13:53.270 "num_base_bdevs_discovered": 3, 00:13:53.270 "num_base_bdevs_operational": 3, 00:13:53.270 "base_bdevs_list": [ 00:13:53.270 { 00:13:53.270 "name": null, 00:13:53.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.271 "is_configured": false, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev2", 00:13:53.271 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev3", 00:13:53.271 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev4", 00:13:53.271 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 } 00:13:53.271 ] 00:13:53.271 }' 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.271 15:22:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.271 [2024-11-10 15:22:59.511964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.271 15:22:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:53.271 [2024-11-10 15:22:59.571204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:13:53.271 [2024-11-10 15:22:59.573556] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:53.533 [2024-11-10 15:22:59.683887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.533 [2024-11-10 15:22:59.685842] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:53.798 [2024-11-10 15:22:59.913180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:53.798 [2024-11-10 15:22:59.914334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:54.075 [2024-11-10 15:23:00.261045] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:54.075 [2024-11-10 15:23:00.263240] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:54.363 157.33 IOPS, 472.00 MiB/s [2024-11-10T15:23:00.726Z] [2024-11-10 15:23:00.482558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.363 15:23:00 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.363 "name": "raid_bdev1", 00:13:54.363 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:54.363 "strip_size_kb": 0, 00:13:54.363 "state": "online", 00:13:54.363 "raid_level": "raid1", 00:13:54.363 "superblock": false, 00:13:54.363 "num_base_bdevs": 4, 00:13:54.363 "num_base_bdevs_discovered": 4, 00:13:54.363 "num_base_bdevs_operational": 4, 00:13:54.363 "process": { 00:13:54.363 "type": "rebuild", 00:13:54.363 "target": "spare", 00:13:54.363 "progress": { 00:13:54.363 "blocks": 10240, 00:13:54.363 "percent": 15 00:13:54.363 } 00:13:54.363 }, 00:13:54.363 "base_bdevs_list": [ 00:13:54.363 { 00:13:54.363 "name": "spare", 00:13:54.363 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:54.363 "is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 }, 00:13:54.363 { 00:13:54.363 "name": "BaseBdev2", 00:13:54.363 "uuid": "cee786b7-a8f9-505d-a1ba-a4baddfb2c49", 00:13:54.363 
"is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 }, 00:13:54.363 { 00:13:54.363 "name": "BaseBdev3", 00:13:54.363 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:54.363 "is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 }, 00:13:54.363 { 00:13:54.363 "name": "BaseBdev4", 00:13:54.363 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:54.363 "is_configured": true, 00:13:54.363 "data_offset": 0, 00:13:54.363 "data_size": 65536 00:13:54.363 } 00:13:54.363 ] 00:13:54.363 }' 00:13:54.363 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.364 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.364 [2024-11-10 15:23:00.703077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:54.364 [2024-11-10 
15:23:00.713312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:54.627 [2024-11-10 15:23:00.956571] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:13:54.627 [2024-11-10 15:23:00.956620] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:13:54.627 [2024-11-10 15:23:00.959381] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.627 15:23:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.888 15:23:01 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.888 "name": "raid_bdev1", 00:13:54.888 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:54.888 "strip_size_kb": 0, 00:13:54.888 "state": "online", 00:13:54.888 "raid_level": "raid1", 00:13:54.888 "superblock": false, 00:13:54.888 "num_base_bdevs": 4, 00:13:54.888 "num_base_bdevs_discovered": 3, 00:13:54.888 "num_base_bdevs_operational": 3, 00:13:54.888 "process": { 00:13:54.888 "type": "rebuild", 00:13:54.888 "target": "spare", 00:13:54.888 "progress": { 00:13:54.888 "blocks": 16384, 00:13:54.888 "percent": 25 00:13:54.888 } 00:13:54.888 }, 00:13:54.888 "base_bdevs_list": [ 00:13:54.888 { 00:13:54.888 "name": "spare", 00:13:54.888 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": null, 00:13:54.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.888 "is_configured": false, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": "BaseBdev3", 00:13:54.888 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": "BaseBdev4", 00:13:54.888 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 } 00:13:54.888 ] 00:13:54.888 }' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=395 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.888 "name": "raid_bdev1", 00:13:54.888 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:54.888 "strip_size_kb": 0, 00:13:54.888 "state": "online", 00:13:54.888 "raid_level": "raid1", 00:13:54.888 "superblock": false, 00:13:54.888 "num_base_bdevs": 4, 00:13:54.888 "num_base_bdevs_discovered": 3, 00:13:54.888 "num_base_bdevs_operational": 3, 00:13:54.888 "process": { 00:13:54.888 "type": "rebuild", 00:13:54.888 "target": "spare", 00:13:54.888 "progress": { 00:13:54.888 "blocks": 18432, 00:13:54.888 
"percent": 28 00:13:54.888 } 00:13:54.888 }, 00:13:54.888 "base_bdevs_list": [ 00:13:54.888 { 00:13:54.888 "name": "spare", 00:13:54.888 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": null, 00:13:54.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:54.888 "is_configured": false, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": "BaseBdev3", 00:13:54.888 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 }, 00:13:54.888 { 00:13:54.888 "name": "BaseBdev4", 00:13:54.888 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:54.888 "is_configured": true, 00:13:54.888 "data_offset": 0, 00:13:54.888 "data_size": 65536 00:13:54.888 } 00:13:54.888 ] 00:13:54.888 }' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.888 [2024-11-10 15:23:01.206739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:54.888 15:23:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.148 [2024-11-10 15:23:01.327750] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:56.088 130.75 IOPS, 392.25 MiB/s [2024-11-10T15:23:02.451Z] [2024-11-10 15:23:02.098971] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.088 "name": "raid_bdev1", 00:13:56.088 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:56.088 "strip_size_kb": 0, 00:13:56.088 "state": "online", 00:13:56.088 "raid_level": "raid1", 00:13:56.088 "superblock": false, 00:13:56.088 "num_base_bdevs": 4, 00:13:56.088 "num_base_bdevs_discovered": 3, 00:13:56.088 "num_base_bdevs_operational": 3, 00:13:56.088 "process": { 00:13:56.088 "type": "rebuild", 00:13:56.088 "target": "spare", 00:13:56.088 "progress": { 00:13:56.088 "blocks": 34816, 00:13:56.088 "percent": 53 00:13:56.088 } 00:13:56.088 }, 
00:13:56.088 "base_bdevs_list": [ 00:13:56.088 { 00:13:56.088 "name": "spare", 00:13:56.088 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:56.088 "is_configured": true, 00:13:56.088 "data_offset": 0, 00:13:56.088 "data_size": 65536 00:13:56.088 }, 00:13:56.088 { 00:13:56.088 "name": null, 00:13:56.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.088 "is_configured": false, 00:13:56.088 "data_offset": 0, 00:13:56.088 "data_size": 65536 00:13:56.088 }, 00:13:56.088 { 00:13:56.088 "name": "BaseBdev3", 00:13:56.088 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:56.088 "is_configured": true, 00:13:56.088 "data_offset": 0, 00:13:56.088 "data_size": 65536 00:13:56.088 }, 00:13:56.088 { 00:13:56.088 "name": "BaseBdev4", 00:13:56.088 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:56.088 "is_configured": true, 00:13:56.088 "data_offset": 0, 00:13:56.088 "data_size": 65536 00:13:56.088 } 00:13:56.088 ] 00:13:56.088 }' 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:56.088 15:23:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:56.088 115.60 IOPS, 346.80 MiB/s [2024-11-10T15:23:02.451Z] [2024-11-10 15:23:02.444991] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:56.348 [2024-11-10 15:23:02.568980] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:56.607 [2024-11-10 15:23:02.794193] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:56.866 [2024-11-10 15:23:03.026080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:57.126 [2024-11-10 15:23:03.242617] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.126 103.50 IOPS, 310.50 MiB/s [2024-11-10T15:23:03.489Z] 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:57.126 "name": "raid_bdev1", 00:13:57.126 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:57.126 "strip_size_kb": 0, 00:13:57.126 "state": "online", 00:13:57.126 "raid_level": "raid1", 00:13:57.126 "superblock": false, 
00:13:57.126 "num_base_bdevs": 4, 00:13:57.126 "num_base_bdevs_discovered": 3, 00:13:57.126 "num_base_bdevs_operational": 3, 00:13:57.126 "process": { 00:13:57.126 "type": "rebuild", 00:13:57.126 "target": "spare", 00:13:57.126 "progress": { 00:13:57.126 "blocks": 51200, 00:13:57.126 "percent": 78 00:13:57.126 } 00:13:57.126 }, 00:13:57.126 "base_bdevs_list": [ 00:13:57.126 { 00:13:57.126 "name": "spare", 00:13:57.126 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 0, 00:13:57.126 "data_size": 65536 00:13:57.126 }, 00:13:57.126 { 00:13:57.126 "name": null, 00:13:57.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:57.126 "is_configured": false, 00:13:57.126 "data_offset": 0, 00:13:57.126 "data_size": 65536 00:13:57.126 }, 00:13:57.126 { 00:13:57.126 "name": "BaseBdev3", 00:13:57.126 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 0, 00:13:57.126 "data_size": 65536 00:13:57.126 }, 00:13:57.126 { 00:13:57.126 "name": "BaseBdev4", 00:13:57.126 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 0, 00:13:57.126 "data_size": 65536 00:13:57.126 } 00:13:57.126 ] 00:13:57.126 }' 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:57.126 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:57.386 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:57.386 15:23:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:57.386 [2024-11-10 15:23:03.579907] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 
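The `progress.percent` values reported through this run track `progress.blocks` against the 65536-block `data_size` by integer division (12288 → 18, 16384 → 25, 18432 → 28, 34816 → 53, 51200 → 78). A sketch of that relationship, inferred from the logged pairs rather than taken from SPDK's documented internals:

```shell
# Reproduce the blocks -> percent pairs seen in the progress JSON above,
# assuming percent = floor(blocks * 100 / data_size) with a 65536-block
# data_size (an inference from this log, not SPDK's stated formula).
data_size=65536
for blocks in 12288 16384 18432 34816 51200; do
  percent=$(( blocks * 100 / data_size ))
  echo "blocks=$blocks percent=$percent"
done
```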
00:13:57.956 [2024-11-10 15:23:04.011233] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:57.956 [2024-11-10 15:23:04.116322] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:57.956 [2024-11-10 15:23:04.121999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.216 94.43 IOPS, 283.29 MiB/s [2024-11-10T15:23:04.579Z] 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.216 "name": "raid_bdev1", 00:13:58.216 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:58.216 "strip_size_kb": 0, 00:13:58.216 "state": "online", 00:13:58.216 "raid_level": "raid1", 00:13:58.216 "superblock": false, 00:13:58.216 
"num_base_bdevs": 4, 00:13:58.216 "num_base_bdevs_discovered": 3, 00:13:58.216 "num_base_bdevs_operational": 3, 00:13:58.216 "base_bdevs_list": [ 00:13:58.216 { 00:13:58.216 "name": "spare", 00:13:58.216 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 0, 00:13:58.216 "data_size": 65536 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": null, 00:13:58.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.216 "is_configured": false, 00:13:58.216 "data_offset": 0, 00:13:58.216 "data_size": 65536 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": "BaseBdev3", 00:13:58.216 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 0, 00:13:58.216 "data_size": 65536 00:13:58.216 }, 00:13:58.216 { 00:13:58.216 "name": "BaseBdev4", 00:13:58.216 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:58.216 "is_configured": true, 00:13:58.216 "data_offset": 0, 00:13:58.216 "data_size": 65536 00:13:58.216 } 00:13:58.216 ] 00:13:58.216 }' 00:13:58.216 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.476 15:23:04 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.476 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.477 "name": "raid_bdev1", 00:13:58.477 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:58.477 "strip_size_kb": 0, 00:13:58.477 "state": "online", 00:13:58.477 "raid_level": "raid1", 00:13:58.477 "superblock": false, 00:13:58.477 "num_base_bdevs": 4, 00:13:58.477 "num_base_bdevs_discovered": 3, 00:13:58.477 "num_base_bdevs_operational": 3, 00:13:58.477 "base_bdevs_list": [ 00:13:58.477 { 00:13:58.477 "name": "spare", 00:13:58.477 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": null, 00:13:58.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.477 "is_configured": false, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": "BaseBdev3", 00:13:58.477 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": "BaseBdev4", 00:13:58.477 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 
00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 } 00:13:58.477 ] 00:13:58.477 }' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- 
# set +x 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.477 "name": "raid_bdev1", 00:13:58.477 "uuid": "2476c456-e0e8-4a5f-a55f-3b5f9709d9ec", 00:13:58.477 "strip_size_kb": 0, 00:13:58.477 "state": "online", 00:13:58.477 "raid_level": "raid1", 00:13:58.477 "superblock": false, 00:13:58.477 "num_base_bdevs": 4, 00:13:58.477 "num_base_bdevs_discovered": 3, 00:13:58.477 "num_base_bdevs_operational": 3, 00:13:58.477 "base_bdevs_list": [ 00:13:58.477 { 00:13:58.477 "name": "spare", 00:13:58.477 "uuid": "7e55c77c-838e-5bc2-95d9-a76ab44a5591", 00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": null, 00:13:58.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.477 "is_configured": false, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": "BaseBdev3", 00:13:58.477 "uuid": "ce3622f1-67e8-5900-859f-1f3fa7d70340", 00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 }, 00:13:58.477 { 00:13:58.477 "name": "BaseBdev4", 00:13:58.477 "uuid": "56bfd57c-154e-53d7-9840-a9e3f4d99320", 00:13:58.477 "is_configured": true, 00:13:58.477 "data_offset": 0, 00:13:58.477 "data_size": 65536 00:13:58.477 } 00:13:58.477 ] 00:13:58.477 }' 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.477 15:23:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.047 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:59.047 15:23:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.047 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.047 [2024-11-10 15:23:05.215948] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.047 [2024-11-10 15:23:05.215992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.047 00:13:59.047 Latency(us) 00:13:59.047 [2024-11-10T15:23:05.410Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.047 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:59.047 raid_bdev1 : 7.93 86.79 260.38 0.00 0.00 15598.55 283.82 119727.58 00:13:59.047 [2024-11-10T15:23:05.410Z] =================================================================================================================== 00:13:59.047 [2024-11-10T15:23:05.410Z] Total : 86.79 260.38 0.00 0.00 15598.55 283.82 119727.58 00:13:59.047 [2024-11-10 15:23:05.319944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.047 [2024-11-10 15:23:05.320000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.047 [2024-11-10 15:23:05.320143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.047 [2024-11-10 15:23:05.320169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:13:59.047 { 00:13:59.047 "results": [ 00:13:59.047 { 00:13:59.047 "job": "raid_bdev1", 00:13:59.047 "core_mask": "0x1", 00:13:59.047 "workload": "randrw", 00:13:59.047 "percentage": 50, 00:13:59.047 "status": "finished", 00:13:59.048 "queue_depth": 2, 00:13:59.048 "io_size": 3145728, 00:13:59.048 "runtime": 7.926937, 00:13:59.048 "iops": 86.79266657474382, 00:13:59.048 "mibps": 260.3779997242315, 00:13:59.048 "io_failed": 0, 00:13:59.048 
"io_timeout": 0, 00:13:59.048 "avg_latency_us": 15598.554665250469, 00:13:59.048 "min_latency_us": 283.82463174409486, 00:13:59.048 "max_latency_us": 119727.58302100583 00:13:59.048 } 00:13:59.048 ], 00:13:59.048 "core_count": 1 00:13:59.048 } 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.048 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:59.307 /dev/nbd0 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.308 1+0 records in 00:13:59.308 1+0 records out 00:13:59.308 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452427 s, 9.1 MB/s 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:59.308 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.308 15:23:05 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:59.568 /dev/nbd1 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:59.568 1+0 records in 00:13:59.568 1+0 records out 00:13:59.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382829 s, 10.7 MB/s 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.568 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:59.828 15:23:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:59.828 
15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.828 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:00.087 /dev/nbd1 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # local i 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:00.088 15:23:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # break 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:00.088 1+0 records in 00:14:00.088 1+0 records out 00:14:00.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358964 s, 11.4 MB/s 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # size=4096 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # return 0 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:00.088 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.347 
15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:00.347 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 90739 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@952 -- # '[' -z 90739 ']' 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # kill -0 90739 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # uname 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.607 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 90739 00:14:00.608 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:00.608 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.608 15:23:06 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 90739' 00:14:00.608 killing process with pid 90739 00:14:00.608 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@971 -- # kill 90739 00:14:00.608 Received shutdown signal, test time was about 9.552343 seconds 00:14:00.608 00:14:00.608 Latency(us) 00:14:00.608 [2024-11-10T15:23:06.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:00.608 [2024-11-10T15:23:06.971Z] =================================================================================================================== 00:14:00.608 [2024-11-10T15:23:06.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:00.608 [2024-11-10 15:23:06.941388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.608 15:23:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@976 -- # wait 90739 00:14:00.868 [2024-11-10 15:23:07.024130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:01.129 00:14:01.129 real 0m11.773s 00:14:01.129 user 0m15.003s 00:14:01.129 sys 0m1.890s 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.129 ************************************ 00:14:01.129 END TEST raid_rebuild_test_io 00:14:01.129 ************************************ 00:14:01.129 15:23:07 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:01.129 15:23:07 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:01.129 15:23:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.129 15:23:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:01.129 ************************************ 
00:14:01.129 START TEST raid_rebuild_test_sb_io 00:14:01.129 ************************************ 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 4 true true true 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=91133 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 91133 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@833 -- # '[' -z 91133 ']' 00:14:01.129 15:23:07 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.129 15:23:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.388 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:01.388 Zero copy mechanism will not be used. 00:14:01.388 [2024-11-10 15:23:07.517231] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:14:01.389 [2024-11-10 15:23:07.517351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91133 ] 00:14:01.389 [2024-11-10 15:23:07.650686] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:01.389 [2024-11-10 15:23:07.690875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.389 [2024-11-10 15:23:07.727943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.648 [2024-11-10 15:23:07.803463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.648 [2024-11-10 15:23:07.803505] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@866 -- # return 0 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 BaseBdev1_malloc 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 [2024-11-10 15:23:08.361664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:02.218 [2024-11-10 15:23:08.361741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.218 [2024-11-10 15:23:08.361774] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 
00:14:02.218 [2024-11-10 15:23:08.361793] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.218 [2024-11-10 15:23:08.364296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.218 [2024-11-10 15:23:08.364334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:02.218 BaseBdev1 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.218 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.218 BaseBdev2_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-11-10 15:23:08.396286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:02.219 [2024-11-10 15:23:08.396360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.219 [2024-11-10 15:23:08.396380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:02.219 [2024-11-10 15:23:08.396392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.219 [2024-11-10 15:23:08.398771] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.219 [2024-11-10 15:23:08.398808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:02.219 BaseBdev2 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 BaseBdev3_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-11-10 15:23:08.430745] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:02.219 [2024-11-10 15:23:08.430798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.219 [2024-11-10 15:23:08.430819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:02.219 [2024-11-10 15:23:08.430830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.219 [2024-11-10 15:23:08.433260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.219 [2024-11-10 15:23:08.433296] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.219 BaseBdev3 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 BaseBdev4_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-11-10 15:23:08.479057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:02.219 [2024-11-10 15:23:08.479133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.219 [2024-11-10 15:23:08.479160] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:02.219 [2024-11-10 15:23:08.479179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.219 [2024-11-10 15:23:08.482886] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.219 [2024-11-10 15:23:08.482941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:02.219 BaseBdev4 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 spare_malloc 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 spare_delay 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-11-10 15:23:08.525719] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:02.219 [2024-11-10 15:23:08.525778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.219 [2024-11-10 15:23:08.525796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:02.219 [2024-11-10 15:23:08.525807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.219 [2024-11-10 15:23:08.528307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.219 [2024-11-10 15:23:08.528345] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:02.219 spare 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 [2024-11-10 15:23:08.537805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:02.219 [2024-11-10 15:23:08.539892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:02.219 [2024-11-10 15:23:08.539961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:02.219 [2024-11-10 15:23:08.540022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:02.219 [2024-11-10 15:23:08.540195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:02.219 [2024-11-10 15:23:08.540219] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:02.219 [2024-11-10 15:23:08.540484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:02.219 [2024-11-10 15:23:08.540674] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:02.219 [2024-11-10 15:23:08.540690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:02.219 [2024-11-10 15:23:08.540809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.219 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.479 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.479 "name": "raid_bdev1", 00:14:02.480 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:02.480 "strip_size_kb": 0, 00:14:02.480 "state": "online", 00:14:02.480 "raid_level": "raid1", 
00:14:02.480 "superblock": true, 00:14:02.480 "num_base_bdevs": 4, 00:14:02.480 "num_base_bdevs_discovered": 4, 00:14:02.480 "num_base_bdevs_operational": 4, 00:14:02.480 "base_bdevs_list": [ 00:14:02.480 { 00:14:02.480 "name": "BaseBdev1", 00:14:02.480 "uuid": "62d76d02-9211-54f5-b0b9-40972e8787b1", 00:14:02.480 "is_configured": true, 00:14:02.480 "data_offset": 2048, 00:14:02.480 "data_size": 63488 00:14:02.480 }, 00:14:02.480 { 00:14:02.480 "name": "BaseBdev2", 00:14:02.480 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:02.480 "is_configured": true, 00:14:02.480 "data_offset": 2048, 00:14:02.480 "data_size": 63488 00:14:02.480 }, 00:14:02.480 { 00:14:02.480 "name": "BaseBdev3", 00:14:02.480 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:02.480 "is_configured": true, 00:14:02.480 "data_offset": 2048, 00:14:02.480 "data_size": 63488 00:14:02.480 }, 00:14:02.480 { 00:14:02.480 "name": "BaseBdev4", 00:14:02.480 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:02.480 "is_configured": true, 00:14:02.480 "data_offset": 2048, 00:14:02.480 "data_size": 63488 00:14:02.480 } 00:14:02.480 ] 00:14:02.480 }' 00:14:02.480 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.480 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.740 [2024-11-10 15:23:08.910165] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.740 15:23:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.740 [2024-11-10 15:23:09.009868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.740 15:23:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.740 "name": "raid_bdev1", 00:14:02.740 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:02.740 "strip_size_kb": 0, 00:14:02.740 "state": "online", 00:14:02.740 "raid_level": "raid1", 00:14:02.740 "superblock": true, 00:14:02.740 "num_base_bdevs": 4, 00:14:02.740 "num_base_bdevs_discovered": 3, 00:14:02.740 "num_base_bdevs_operational": 3, 00:14:02.740 "base_bdevs_list": [ 00:14:02.740 { 00:14:02.740 "name": null, 00:14:02.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.740 "is_configured": false, 00:14:02.740 "data_offset": 0, 00:14:02.740 "data_size": 
63488 00:14:02.740 }, 00:14:02.740 { 00:14:02.740 "name": "BaseBdev2", 00:14:02.740 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:02.740 "is_configured": true, 00:14:02.740 "data_offset": 2048, 00:14:02.740 "data_size": 63488 00:14:02.740 }, 00:14:02.740 { 00:14:02.740 "name": "BaseBdev3", 00:14:02.740 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:02.740 "is_configured": true, 00:14:02.740 "data_offset": 2048, 00:14:02.740 "data_size": 63488 00:14:02.740 }, 00:14:02.740 { 00:14:02.740 "name": "BaseBdev4", 00:14:02.740 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:02.740 "is_configured": true, 00:14:02.740 "data_offset": 2048, 00:14:02.740 "data_size": 63488 00:14:02.740 } 00:14:02.740 ] 00:14:02.740 }' 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.740 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.999 [2024-11-10 15:23:09.105340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:03.000 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.000 Zero copy mechanism will not be used. 00:14:03.000 Running I/O for 60 seconds... 
00:14:03.259 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:03.259 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.259 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.259 [2024-11-10 15:23:09.487658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:03.259 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.259 15:23:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:03.259 [2024-11-10 15:23:09.531271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:14:03.259 [2024-11-10 15:23:09.533643] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:03.518 [2024-11-10 15:23:09.656319] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:03.518 [2024-11-10 15:23:09.658258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:03.779 [2024-11-10 15:23:09.885013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:03.779 [2024-11-10 15:23:09.886166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:04.349 168.00 IOPS, 504.00 MiB/s [2024-11-10T15:23:10.712Z] [2024-11-10 15:23:10.402223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.349 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.350 "name": "raid_bdev1", 00:14:04.350 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:04.350 "strip_size_kb": 0, 00:14:04.350 "state": "online", 00:14:04.350 "raid_level": "raid1", 00:14:04.350 "superblock": true, 00:14:04.350 "num_base_bdevs": 4, 00:14:04.350 "num_base_bdevs_discovered": 4, 00:14:04.350 "num_base_bdevs_operational": 4, 00:14:04.350 "process": { 00:14:04.350 "type": "rebuild", 00:14:04.350 "target": "spare", 00:14:04.350 "progress": { 00:14:04.350 "blocks": 10240, 00:14:04.350 "percent": 16 00:14:04.350 } 00:14:04.350 }, 00:14:04.350 "base_bdevs_list": [ 00:14:04.350 { 00:14:04.350 "name": "spare", 00:14:04.350 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:04.350 "is_configured": true, 00:14:04.350 "data_offset": 2048, 00:14:04.350 "data_size": 63488 00:14:04.350 }, 00:14:04.350 { 00:14:04.350 "name": "BaseBdev2", 00:14:04.350 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:04.350 "is_configured": true, 
00:14:04.350 "data_offset": 2048, 00:14:04.350 "data_size": 63488 00:14:04.350 }, 00:14:04.350 { 00:14:04.350 "name": "BaseBdev3", 00:14:04.350 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:04.350 "is_configured": true, 00:14:04.350 "data_offset": 2048, 00:14:04.350 "data_size": 63488 00:14:04.350 }, 00:14:04.350 { 00:14:04.350 "name": "BaseBdev4", 00:14:04.350 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:04.350 "is_configured": true, 00:14:04.350 "data_offset": 2048, 00:14:04.350 "data_size": 63488 00:14:04.350 } 00:14:04.350 ] 00:14:04.350 }' 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.350 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.350 [2024-11-10 15:23:10.645467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.610 [2024-11-10 15:23:10.754488] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:04.610 [2024-11-10 15:23:10.771446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.610 [2024-11-10 15:23:10.771505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:04.610 [2024-11-10 15:23:10.771521] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:14:04.610 [2024-11-10 15:23:10.786889] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006630 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.610 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.610 15:23:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.610 "name": "raid_bdev1", 00:14:04.610 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:04.610 "strip_size_kb": 0, 00:14:04.610 "state": "online", 00:14:04.610 "raid_level": "raid1", 00:14:04.610 "superblock": true, 00:14:04.610 "num_base_bdevs": 4, 00:14:04.610 "num_base_bdevs_discovered": 3, 00:14:04.610 "num_base_bdevs_operational": 3, 00:14:04.610 "base_bdevs_list": [ 00:14:04.610 { 00:14:04.610 "name": null, 00:14:04.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.610 "is_configured": false, 00:14:04.610 "data_offset": 0, 00:14:04.610 "data_size": 63488 00:14:04.610 }, 00:14:04.610 { 00:14:04.610 "name": "BaseBdev2", 00:14:04.610 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:04.610 "is_configured": true, 00:14:04.610 "data_offset": 2048, 00:14:04.610 "data_size": 63488 00:14:04.610 }, 00:14:04.610 { 00:14:04.610 "name": "BaseBdev3", 00:14:04.610 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:04.611 "is_configured": true, 00:14:04.611 "data_offset": 2048, 00:14:04.611 "data_size": 63488 00:14:04.611 }, 00:14:04.611 { 00:14:04.611 "name": "BaseBdev4", 00:14:04.611 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:04.611 "is_configured": true, 00:14:04.611 "data_offset": 2048, 00:14:04.611 "data_size": 63488 00:14:04.611 } 00:14:04.611 ] 00:14:04.611 }' 00:14:04.611 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.611 15:23:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.130 153.50 IOPS, 460.50 MiB/s [2024-11-10T15:23:11.493Z] 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.130 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.130 "name": "raid_bdev1", 00:14:05.130 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:05.130 "strip_size_kb": 0, 00:14:05.130 "state": "online", 00:14:05.130 "raid_level": "raid1", 00:14:05.130 "superblock": true, 00:14:05.130 "num_base_bdevs": 4, 00:14:05.130 "num_base_bdevs_discovered": 3, 00:14:05.130 "num_base_bdevs_operational": 3, 00:14:05.130 "base_bdevs_list": [ 00:14:05.130 { 00:14:05.130 "name": null, 00:14:05.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.130 "is_configured": false, 00:14:05.130 "data_offset": 0, 00:14:05.130 "data_size": 63488 00:14:05.130 }, 00:14:05.130 { 00:14:05.130 "name": "BaseBdev2", 00:14:05.131 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:05.131 "is_configured": true, 00:14:05.131 "data_offset": 2048, 00:14:05.131 "data_size": 63488 00:14:05.131 }, 00:14:05.131 { 00:14:05.131 "name": "BaseBdev3", 00:14:05.131 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:05.131 "is_configured": true, 00:14:05.131 "data_offset": 2048, 00:14:05.131 "data_size": 63488 00:14:05.131 }, 00:14:05.131 { 00:14:05.131 "name": 
"BaseBdev4", 00:14:05.131 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:05.131 "is_configured": true, 00:14:05.131 "data_offset": 2048, 00:14:05.131 "data_size": 63488 00:14:05.131 } 00:14:05.131 ] 00:14:05.131 }' 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.131 [2024-11-10 15:23:11.391265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.131 15:23:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:05.131 [2024-11-10 15:23:11.450321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:14:05.131 [2024-11-10 15:23:11.452732] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:05.391 [2024-11-10 15:23:11.571512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.391 [2024-11-10 15:23:11.571934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:05.650 [2024-11-10 15:23:11.793448] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.650 [2024-11-10 15:23:11.794512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:05.911 134.67 IOPS, 404.00 MiB/s [2024-11-10T15:23:12.274Z] [2024-11-10 15:23:12.159023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:05.911 [2024-11-10 15:23:12.159539] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.170 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.171 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.171 "name": "raid_bdev1", 00:14:06.171 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:06.171 
"strip_size_kb": 0, 00:14:06.171 "state": "online", 00:14:06.171 "raid_level": "raid1", 00:14:06.171 "superblock": true, 00:14:06.171 "num_base_bdevs": 4, 00:14:06.171 "num_base_bdevs_discovered": 4, 00:14:06.171 "num_base_bdevs_operational": 4, 00:14:06.171 "process": { 00:14:06.171 "type": "rebuild", 00:14:06.171 "target": "spare", 00:14:06.171 "progress": { 00:14:06.171 "blocks": 12288, 00:14:06.171 "percent": 19 00:14:06.171 } 00:14:06.171 }, 00:14:06.171 "base_bdevs_list": [ 00:14:06.171 { 00:14:06.171 "name": "spare", 00:14:06.171 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:06.171 "is_configured": true, 00:14:06.171 "data_offset": 2048, 00:14:06.171 "data_size": 63488 00:14:06.171 }, 00:14:06.171 { 00:14:06.171 "name": "BaseBdev2", 00:14:06.171 "uuid": "5220d18c-94bc-5787-baf6-d7177c369811", 00:14:06.171 "is_configured": true, 00:14:06.171 "data_offset": 2048, 00:14:06.171 "data_size": 63488 00:14:06.171 }, 00:14:06.171 { 00:14:06.171 "name": "BaseBdev3", 00:14:06.171 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:06.171 "is_configured": true, 00:14:06.171 "data_offset": 2048, 00:14:06.171 "data_size": 63488 00:14:06.171 }, 00:14:06.171 { 00:14:06.171 "name": "BaseBdev4", 00:14:06.171 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:06.171 "is_configured": true, 00:14:06.171 "data_offset": 2048, 00:14:06.171 "data_size": 63488 00:14:06.171 } 00:14:06.171 ] 00:14:06.171 }' 00:14:06.171 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.171 [2024-11-10 15:23:12.487072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:06.171 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:06.431 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.431 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.431 [2024-11-10 15:23:12.588834] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:06.691 [2024-11-10 15:23:12.864783] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006630 00:14:06.691 [2024-11-10 15:23:12.864837] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000067d0 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 [2024-11-10 15:23:12.887784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.691 "name": "raid_bdev1", 00:14:06.691 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:06.691 "strip_size_kb": 0, 00:14:06.691 "state": "online", 00:14:06.691 "raid_level": "raid1", 00:14:06.691 "superblock": true, 00:14:06.691 "num_base_bdevs": 4, 00:14:06.691 "num_base_bdevs_discovered": 3, 00:14:06.691 "num_base_bdevs_operational": 3, 00:14:06.691 "process": { 00:14:06.691 "type": "rebuild", 00:14:06.691 "target": "spare", 00:14:06.691 "progress": { 00:14:06.691 "blocks": 16384, 00:14:06.691 "percent": 25 00:14:06.691 } 00:14:06.691 }, 00:14:06.691 "base_bdevs_list": [ 00:14:06.691 { 00:14:06.691 "name": "spare", 00:14:06.691 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:06.691 "is_configured": true, 00:14:06.691 "data_offset": 2048, 00:14:06.691 "data_size": 63488 00:14:06.691 
}, 00:14:06.691 { 00:14:06.691 "name": null, 00:14:06.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.691 "is_configured": false, 00:14:06.691 "data_offset": 0, 00:14:06.691 "data_size": 63488 00:14:06.691 }, 00:14:06.691 { 00:14:06.691 "name": "BaseBdev3", 00:14:06.691 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:06.691 "is_configured": true, 00:14:06.691 "data_offset": 2048, 00:14:06.691 "data_size": 63488 00:14:06.691 }, 00:14:06.691 { 00:14:06.691 "name": "BaseBdev4", 00:14:06.691 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:06.691 "is_configured": true, 00:14:06.691 "data_offset": 2048, 00:14:06.691 "data_size": 63488 00:14:06.691 } 00:14:06.691 ] 00:14:06.691 }' 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.691 15:23:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=407 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.691 15:23:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.691 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.951 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.951 "name": "raid_bdev1", 00:14:06.951 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:06.951 "strip_size_kb": 0, 00:14:06.951 "state": "online", 00:14:06.951 "raid_level": "raid1", 00:14:06.951 "superblock": true, 00:14:06.951 "num_base_bdevs": 4, 00:14:06.951 "num_base_bdevs_discovered": 3, 00:14:06.951 "num_base_bdevs_operational": 3, 00:14:06.951 "process": { 00:14:06.951 "type": "rebuild", 00:14:06.951 "target": "spare", 00:14:06.951 "progress": { 00:14:06.951 "blocks": 16384, 00:14:06.951 "percent": 25 00:14:06.951 } 00:14:06.951 }, 00:14:06.951 "base_bdevs_list": [ 00:14:06.951 { 00:14:06.951 "name": "spare", 00:14:06.951 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:06.951 "is_configured": true, 00:14:06.951 "data_offset": 2048, 00:14:06.951 "data_size": 63488 00:14:06.951 }, 00:14:06.951 { 00:14:06.951 "name": null, 00:14:06.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.951 "is_configured": false, 00:14:06.951 "data_offset": 0, 00:14:06.951 "data_size": 63488 00:14:06.951 }, 00:14:06.951 { 00:14:06.951 "name": "BaseBdev3", 00:14:06.951 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:06.951 "is_configured": true, 00:14:06.951 "data_offset": 2048, 00:14:06.951 "data_size": 63488 00:14:06.951 }, 00:14:06.951 { 00:14:06.951 "name": "BaseBdev4", 00:14:06.951 "uuid": 
"926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:06.951 "is_configured": true, 00:14:06.951 "data_offset": 2048, 00:14:06.951 "data_size": 63488 00:14:06.951 } 00:14:06.951 ] 00:14:06.951 }' 00:14:06.951 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.951 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:06.951 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.951 116.50 IOPS, 349.50 MiB/s [2024-11-10T15:23:13.314Z] 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.951 15:23:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:07.211 [2024-11-10 15:23:13.368158] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:07.471 [2024-11-10 15:23:13.692455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:07.471 [2024-11-10 15:23:13.693791] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:07.730 [2024-11-10 15:23:13.911480] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:07.991 102.60 IOPS, 307.80 MiB/s [2024-11-10T15:23:14.354Z] 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.991 "name": "raid_bdev1", 00:14:07.991 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:07.991 "strip_size_kb": 0, 00:14:07.991 "state": "online", 00:14:07.991 "raid_level": "raid1", 00:14:07.991 "superblock": true, 00:14:07.991 "num_base_bdevs": 4, 00:14:07.991 "num_base_bdevs_discovered": 3, 00:14:07.991 "num_base_bdevs_operational": 3, 00:14:07.991 "process": { 00:14:07.991 "type": "rebuild", 00:14:07.991 "target": "spare", 00:14:07.991 "progress": { 00:14:07.991 "blocks": 32768, 00:14:07.991 "percent": 51 00:14:07.991 } 00:14:07.991 }, 00:14:07.991 "base_bdevs_list": [ 00:14:07.991 { 00:14:07.991 "name": "spare", 00:14:07.991 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:07.991 "is_configured": true, 00:14:07.991 "data_offset": 2048, 00:14:07.991 "data_size": 63488 00:14:07.991 }, 00:14:07.991 { 00:14:07.991 "name": null, 00:14:07.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.991 "is_configured": false, 00:14:07.991 "data_offset": 0, 00:14:07.991 "data_size": 63488 00:14:07.991 }, 00:14:07.991 { 00:14:07.991 "name": "BaseBdev3", 00:14:07.991 "uuid": 
"494ae814-7836-583d-bc8c-57961d4031dd", 00:14:07.991 "is_configured": true, 00:14:07.991 "data_offset": 2048, 00:14:07.991 "data_size": 63488 00:14:07.991 }, 00:14:07.991 { 00:14:07.991 "name": "BaseBdev4", 00:14:07.991 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:07.991 "is_configured": true, 00:14:07.991 "data_offset": 2048, 00:14:07.991 "data_size": 63488 00:14:07.991 } 00:14:07.991 ] 00:14:07.991 }' 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.991 [2024-11-10 15:23:14.244719] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.991 15:23:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.930 [2024-11-10 15:23:14.923920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:08.930 94.00 IOPS, 282.00 MiB/s [2024-11-10T15:23:15.293Z] 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.930 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.930 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.930 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.930 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.930 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.191 "name": "raid_bdev1", 00:14:09.191 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:09.191 "strip_size_kb": 0, 00:14:09.191 "state": "online", 00:14:09.191 "raid_level": "raid1", 00:14:09.191 "superblock": true, 00:14:09.191 "num_base_bdevs": 4, 00:14:09.191 "num_base_bdevs_discovered": 3, 00:14:09.191 "num_base_bdevs_operational": 3, 00:14:09.191 "process": { 00:14:09.191 "type": "rebuild", 00:14:09.191 "target": "spare", 00:14:09.191 "progress": { 00:14:09.191 "blocks": 53248, 00:14:09.191 "percent": 83 00:14:09.191 } 00:14:09.191 }, 00:14:09.191 "base_bdevs_list": [ 00:14:09.191 { 00:14:09.191 "name": "spare", 00:14:09.191 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:09.191 "is_configured": true, 00:14:09.191 "data_offset": 2048, 00:14:09.191 "data_size": 63488 00:14:09.191 }, 00:14:09.191 { 00:14:09.191 "name": null, 00:14:09.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.191 "is_configured": false, 00:14:09.191 "data_offset": 0, 00:14:09.191 "data_size": 63488 00:14:09.191 }, 00:14:09.191 { 00:14:09.191 "name": "BaseBdev3", 00:14:09.191 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:09.191 "is_configured": true, 00:14:09.191 "data_offset": 2048, 00:14:09.191 "data_size": 63488 00:14:09.191 }, 00:14:09.191 { 
00:14:09.191 "name": "BaseBdev4", 00:14:09.191 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:09.191 "is_configured": true, 00:14:09.191 "data_offset": 2048, 00:14:09.191 "data_size": 63488 00:14:09.191 } 00:14:09.191 ] 00:14:09.191 }' 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.191 15:23:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.760 [2024-11-10 15:23:15.817175] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.761 [2024-11-10 15:23:15.922505] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.761 [2024-11-10 15:23:15.925876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.332 85.43 IOPS, 256.29 MiB/s [2024-11-10T15:23:16.695Z] 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.332 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.332 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.333 "name": "raid_bdev1", 00:14:10.333 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:10.333 "strip_size_kb": 0, 00:14:10.333 "state": "online", 00:14:10.333 "raid_level": "raid1", 00:14:10.333 "superblock": true, 00:14:10.333 "num_base_bdevs": 4, 00:14:10.333 "num_base_bdevs_discovered": 3, 00:14:10.333 "num_base_bdevs_operational": 3, 00:14:10.333 "base_bdevs_list": [ 00:14:10.333 { 00:14:10.333 "name": "spare", 00:14:10.333 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": null, 00:14:10.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.333 "is_configured": false, 00:14:10.333 "data_offset": 0, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": "BaseBdev3", 00:14:10.333 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": "BaseBdev4", 00:14:10.333 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 } 00:14:10.333 ] 00:14:10.333 }' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.333 "name": "raid_bdev1", 00:14:10.333 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:10.333 "strip_size_kb": 0, 00:14:10.333 "state": "online", 00:14:10.333 "raid_level": "raid1", 00:14:10.333 "superblock": true, 00:14:10.333 "num_base_bdevs": 4, 
00:14:10.333 "num_base_bdevs_discovered": 3, 00:14:10.333 "num_base_bdevs_operational": 3, 00:14:10.333 "base_bdevs_list": [ 00:14:10.333 { 00:14:10.333 "name": "spare", 00:14:10.333 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": null, 00:14:10.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.333 "is_configured": false, 00:14:10.333 "data_offset": 0, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": "BaseBdev3", 00:14:10.333 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 }, 00:14:10.333 { 00:14:10.333 "name": "BaseBdev4", 00:14:10.333 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:10.333 "is_configured": true, 00:14:10.333 "data_offset": 2048, 00:14:10.333 "data_size": 63488 00:14:10.333 } 00:14:10.333 ] 00:14:10.333 }' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.333 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.607 "name": "raid_bdev1", 00:14:10.607 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:10.607 "strip_size_kb": 0, 00:14:10.607 "state": "online", 00:14:10.607 "raid_level": "raid1", 00:14:10.607 "superblock": true, 00:14:10.607 "num_base_bdevs": 4, 00:14:10.607 "num_base_bdevs_discovered": 3, 00:14:10.607 "num_base_bdevs_operational": 3, 00:14:10.607 "base_bdevs_list": [ 00:14:10.607 { 00:14:10.607 "name": "spare", 00:14:10.607 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:10.607 "is_configured": true, 00:14:10.607 "data_offset": 2048, 00:14:10.607 "data_size": 63488 00:14:10.607 }, 00:14:10.607 { 00:14:10.607 "name": null, 00:14:10.607 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:10.607 "is_configured": false, 00:14:10.607 "data_offset": 0, 00:14:10.607 "data_size": 63488 00:14:10.607 }, 00:14:10.607 { 00:14:10.607 "name": "BaseBdev3", 00:14:10.607 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:10.607 "is_configured": true, 00:14:10.607 "data_offset": 2048, 00:14:10.607 "data_size": 63488 00:14:10.607 }, 00:14:10.607 { 00:14:10.607 "name": "BaseBdev4", 00:14:10.607 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:10.607 "is_configured": true, 00:14:10.607 "data_offset": 2048, 00:14:10.607 "data_size": 63488 00:14:10.607 } 00:14:10.607 ] 00:14:10.607 }' 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.607 15:23:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.882 [2024-11-10 15:23:17.122552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.882 [2024-11-10 15:23:17.122600] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.882 79.75 IOPS, 239.25 MiB/s 00:14:10.882 Latency(us) 00:14:10.882 [2024-11-10T15:23:17.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.882 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:10.882 raid_bdev1 : 8.11 78.88 236.63 0.00 0.00 17910.14 289.18 122469.44 00:14:10.882 [2024-11-10T15:23:17.245Z] =================================================================================================================== 00:14:10.882 [2024-11-10T15:23:17.245Z] Total : 78.88 
236.63 0.00 0.00 17910.14 289.18 122469.44 00:14:10.882 [2024-11-10 15:23:17.226105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.882 [2024-11-10 15:23:17.226150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.882 [2024-11-10 15:23:17.226262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.882 [2024-11-10 15:23:17.226275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:10.882 { 00:14:10.882 "results": [ 00:14:10.882 { 00:14:10.882 "job": "raid_bdev1", 00:14:10.882 "core_mask": "0x1", 00:14:10.882 "workload": "randrw", 00:14:10.882 "percentage": 50, 00:14:10.882 "status": "finished", 00:14:10.882 "queue_depth": 2, 00:14:10.882 "io_size": 3145728, 00:14:10.882 "runtime": 8.114052, 00:14:10.882 "iops": 78.87551127352893, 00:14:10.882 "mibps": 236.6265338205868, 00:14:10.882 "io_failed": 0, 00:14:10.882 "io_timeout": 0, 00:14:10.882 "avg_latency_us": 17910.140329469188, 00:14:10.882 "min_latency_us": 289.1798134751155, 00:14:10.882 "max_latency_us": 122469.43606728842 00:14:10.882 } 00:14:10.882 ], 00:14:10.882 "core_count": 1 00:14:10.882 } 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.882 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
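The trace above shows the test's verification helpers pulling RAID state out of `rpc_cmd bdev_raid_get_bdevs all` with `jq`: select the bdev under test, then read fields with a `// "none"` default so the checks work even when no rebuild process is running. A minimal standalone sketch of that pattern — the sample JSON below is invented, mirroring only the fields visible in the log:

```shell
#!/usr/bin/env bash
# Hypothetical sample mimicking part of `bdev_raid_get_bdevs all` output.
bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
         "num_base_bdevs_discovered":3,"num_base_bdevs_operational":3}]'

# Select the bdev under test, as bdev_raid.sh@113 does.
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$bdevs")

# `// "none"` substitutes a default when the .process object is absent,
# matching the `.process.type // "none"` check at bdev_raid.sh@176.
ptype=$(jq -r '.process.type // "none"' <<< "$info")
state=$(jq -r '.state' <<< "$info")

[[ $state == online ]] || exit 1
[[ $ptype == none ]] || exit 1
echo "state=$state process.type=$ptype"
```

The `// "none"` alternative operator is what lets the same helper assert both "a rebuild is in progress" and "no process is running" without special-casing missing keys.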
00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.142 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:11.142 /dev/nbd0 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.402 1+0 records in 00:14:11.402 1+0 records out 00:14:11.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385525 s, 10.6 MB/s 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:11.402 /dev/nbd1 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 
00:14:11.402 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.662 1+0 records in 00:14:11.662 1+0 records out 00:14:11.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418706 s, 9.8 MB/s 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.662 15:23:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.662 15:23:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 
00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:11.922 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:11.922 /dev/nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # local i 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # break 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:12.182 1+0 records in 00:14:12.182 1+0 records out 00:14:12.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00043448 s, 9.4 MB/s 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # size=4096 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # return 0 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.182 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:12.183 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.183 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:12.183 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.183 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
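The data-integrity step in the trace exports the rebuilt `spare` and each surviving base bdev over NBD and compares them with `cmp -i 1048576`, skipping the first 1 MiB on both devices so the per-bdev superblock region is excluded and only user data is compared. A runnable sketch of the same idea on plain files (the files and the 1 MiB "superblock" size are invented for illustration):

```shell
#!/usr/bin/env bash
a=$(mktemp); b=$(mktemp)
dd if=/dev/urandom of="$a" bs=1M count=4 status=none
cp "$a" "$b"
# Corrupt only the first bytes of b, inside the 1 MiB "superblock" region.
printf 'different-superblock' | dd of="$b" bs=1 conv=notrunc status=none

cmp -s "$a" "$b" && raw=same || raw=differ
# -i 1048576 skips the first 1 MiB of both inputs, as bdev_raid.sh@731
# does, so only the payload past the superblock offset is compared.
cmp -s -i 1048576 "$a" "$b" && payload=same || payload=differ
echo "raw=$raw payload=$payload"   # → raw=differ payload=same
rm -f "$a" "$b"
```

This is why the two mirrored base bdevs can pass the comparison even though their raw device contents differ: the superblocks legitimately diverge, the replicated data must not.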
00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:12.442 
15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.442 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.443 [2024-11-10 15:23:18.795712] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:12.443 [2024-11-10 15:23:18.795769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.443 [2024-11-10 15:23:18.795794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:12.443 [2024-11-10 15:23:18.795804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.443 [2024-11-10 15:23:18.798360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.443 
[2024-11-10 15:23:18.798393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:12.443 [2024-11-10 15:23:18.798479] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:12.443 [2024-11-10 15:23:18.798532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.443 [2024-11-10 15:23:18.798665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.443 [2024-11-10 15:23:18.798790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:12.443 spare 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.443 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.702 [2024-11-10 15:23:18.898864] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:12.702 [2024-11-10 15:23:18.898894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:12.702 [2024-11-10 15:23:18.899217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037570 00:14:12.702 [2024-11-10 15:23:18.899380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:12.702 [2024-11-10 15:23:18.899402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:12.702 [2024-11-10 15:23:18.899537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.702 15:23:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.702 "name": "raid_bdev1", 00:14:12.702 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:12.702 "strip_size_kb": 0, 00:14:12.702 "state": "online", 00:14:12.702 "raid_level": "raid1", 00:14:12.702 
"superblock": true, 00:14:12.702 "num_base_bdevs": 4, 00:14:12.702 "num_base_bdevs_discovered": 3, 00:14:12.702 "num_base_bdevs_operational": 3, 00:14:12.702 "base_bdevs_list": [ 00:14:12.702 { 00:14:12.702 "name": "spare", 00:14:12.702 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:12.702 "is_configured": true, 00:14:12.702 "data_offset": 2048, 00:14:12.702 "data_size": 63488 00:14:12.702 }, 00:14:12.702 { 00:14:12.702 "name": null, 00:14:12.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.702 "is_configured": false, 00:14:12.702 "data_offset": 2048, 00:14:12.702 "data_size": 63488 00:14:12.702 }, 00:14:12.702 { 00:14:12.702 "name": "BaseBdev3", 00:14:12.702 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:12.702 "is_configured": true, 00:14:12.702 "data_offset": 2048, 00:14:12.702 "data_size": 63488 00:14:12.702 }, 00:14:12.702 { 00:14:12.702 "name": "BaseBdev4", 00:14:12.702 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:12.702 "is_configured": true, 00:14:12.702 "data_offset": 2048, 00:14:12.702 "data_size": 63488 00:14:12.702 } 00:14:12.702 ] 00:14:12.702 }' 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.702 15:23:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.272 "name": "raid_bdev1", 00:14:13.272 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:13.272 "strip_size_kb": 0, 00:14:13.272 "state": "online", 00:14:13.272 "raid_level": "raid1", 00:14:13.272 "superblock": true, 00:14:13.272 "num_base_bdevs": 4, 00:14:13.272 "num_base_bdevs_discovered": 3, 00:14:13.272 "num_base_bdevs_operational": 3, 00:14:13.272 "base_bdevs_list": [ 00:14:13.272 { 00:14:13.272 "name": "spare", 00:14:13.272 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:13.272 "is_configured": true, 00:14:13.272 "data_offset": 2048, 00:14:13.272 "data_size": 63488 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "name": null, 00:14:13.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.272 "is_configured": false, 00:14:13.272 "data_offset": 2048, 00:14:13.272 "data_size": 63488 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "name": "BaseBdev3", 00:14:13.272 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:13.272 "is_configured": true, 00:14:13.272 "data_offset": 2048, 00:14:13.272 "data_size": 63488 00:14:13.272 }, 00:14:13.272 { 00:14:13.272 "name": "BaseBdev4", 00:14:13.272 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:13.272 "is_configured": true, 00:14:13.272 "data_offset": 2048, 00:14:13.272 "data_size": 63488 00:14:13.272 } 00:14:13.272 ] 00:14:13.272 }' 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 
00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.272 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.273 [2024-11-10 15:23:19.576048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.273 "name": "raid_bdev1", 00:14:13.273 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:13.273 "strip_size_kb": 0, 00:14:13.273 "state": "online", 00:14:13.273 "raid_level": "raid1", 00:14:13.273 "superblock": true, 00:14:13.273 "num_base_bdevs": 4, 00:14:13.273 "num_base_bdevs_discovered": 2, 00:14:13.273 "num_base_bdevs_operational": 2, 00:14:13.273 "base_bdevs_list": [ 00:14:13.273 { 00:14:13.273 "name": null, 00:14:13.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.273 "is_configured": false, 00:14:13.273 "data_offset": 0, 00:14:13.273 "data_size": 63488 00:14:13.273 }, 00:14:13.273 { 
00:14:13.273 "name": null, 00:14:13.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.273 "is_configured": false, 00:14:13.273 "data_offset": 2048, 00:14:13.273 "data_size": 63488 00:14:13.273 }, 00:14:13.273 { 00:14:13.273 "name": "BaseBdev3", 00:14:13.273 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:13.273 "is_configured": true, 00:14:13.273 "data_offset": 2048, 00:14:13.273 "data_size": 63488 00:14:13.273 }, 00:14:13.273 { 00:14:13.273 "name": "BaseBdev4", 00:14:13.273 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:13.273 "is_configured": true, 00:14:13.273 "data_offset": 2048, 00:14:13.273 "data_size": 63488 00:14:13.273 } 00:14:13.273 ] 00:14:13.273 }' 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.273 15:23:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.841 15:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.841 15:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.841 15:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.841 [2024-11-10 15:23:20.012261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.841 [2024-11-10 15:23:20.012433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.841 [2024-11-10 15:23:20.012452] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
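The xtrace above repeatedly fetches raid state with `rpc_cmd bdev_raid_get_bdevs all` and filters it through `jq -r '.[] | select(.name == "raid_bdev1")'`, then reads `'.process.type // "none"'` and `'.process.target // "none"'` to detect a running rebuild. As a hedged illustration only (not part of the SPDK test suite), the same selection and `//`-fallback logic can be sketched in Python over an abridged copy of the JSON dumped in this log; the helper names `select_bdev` and `process_field` are invented for this sketch:

```python
import json

# Abridged sample of the bdev_raid_get_bdevs output captured in the log above.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

def select_bdev(bdevs, name):
    # Equivalent of jq's '.[] | select(.name == NAME)'.
    return next((b for b in bdevs if b["name"] == name), None)

def process_field(info, field):
    # Equivalent of jq's '.process.FIELD // "none"': when no rebuild
    # process is attached to the raid bdev, fall back to "none".
    return info.get("process", {}).get(field, "none")

info = select_bdev(raid_bdevs, "raid_bdev1")
print(process_field(info, "type"))    # no "process" key in this sample -> none
print(process_field(info, "target"))  # -> none
```

This mirrors why the test sees `[[ none == \n\o\n\e ]]` before a rebuild starts and `rebuild`/`spare` while one is running.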
00:14:13.841 [2024-11-10 15:23:20.012486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.841 [2024-11-10 15:23:20.020270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037640 00:14:13.841 15:23:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.841 15:23:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:13.841 [2024-11-10 15:23:20.022551] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.780 "name": "raid_bdev1", 00:14:14.780 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:14.780 "strip_size_kb": 0, 00:14:14.780 "state": "online", 
00:14:14.780 "raid_level": "raid1", 00:14:14.780 "superblock": true, 00:14:14.780 "num_base_bdevs": 4, 00:14:14.780 "num_base_bdevs_discovered": 3, 00:14:14.780 "num_base_bdevs_operational": 3, 00:14:14.780 "process": { 00:14:14.780 "type": "rebuild", 00:14:14.780 "target": "spare", 00:14:14.780 "progress": { 00:14:14.780 "blocks": 20480, 00:14:14.780 "percent": 32 00:14:14.780 } 00:14:14.780 }, 00:14:14.780 "base_bdevs_list": [ 00:14:14.780 { 00:14:14.780 "name": "spare", 00:14:14.780 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:14.780 "is_configured": true, 00:14:14.780 "data_offset": 2048, 00:14:14.780 "data_size": 63488 00:14:14.780 }, 00:14:14.780 { 00:14:14.780 "name": null, 00:14:14.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.780 "is_configured": false, 00:14:14.780 "data_offset": 2048, 00:14:14.780 "data_size": 63488 00:14:14.780 }, 00:14:14.780 { 00:14:14.780 "name": "BaseBdev3", 00:14:14.780 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:14.780 "is_configured": true, 00:14:14.780 "data_offset": 2048, 00:14:14.780 "data_size": 63488 00:14:14.780 }, 00:14:14.780 { 00:14:14.780 "name": "BaseBdev4", 00:14:14.780 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:14.780 "is_configured": true, 00:14:14.780 "data_offset": 2048, 00:14:14.780 "data_size": 63488 00:14:14.780 } 00:14:14.780 ] 00:14:14.780 }' 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.780 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:15.040 15:23:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.040 [2024-11-10 15:23:21.186065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.040 [2024-11-10 15:23:21.232126] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.040 [2024-11-10 15:23:21.232185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.040 [2024-11-10 15:23:21.232204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.040 [2024-11-10 15:23:21.232212] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.040 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.041 15:23:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.041 "name": "raid_bdev1", 00:14:15.041 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:15.041 "strip_size_kb": 0, 00:14:15.041 "state": "online", 00:14:15.041 "raid_level": "raid1", 00:14:15.041 "superblock": true, 00:14:15.041 "num_base_bdevs": 4, 00:14:15.041 "num_base_bdevs_discovered": 2, 00:14:15.041 "num_base_bdevs_operational": 2, 00:14:15.041 "base_bdevs_list": [ 00:14:15.041 { 00:14:15.041 "name": null, 00:14:15.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.041 "is_configured": false, 00:14:15.041 "data_offset": 0, 00:14:15.041 "data_size": 63488 00:14:15.041 }, 00:14:15.041 { 00:14:15.041 "name": null, 00:14:15.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.041 "is_configured": false, 00:14:15.041 "data_offset": 2048, 00:14:15.041 "data_size": 63488 00:14:15.041 }, 00:14:15.041 { 00:14:15.041 "name": "BaseBdev3", 00:14:15.041 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:15.041 "is_configured": true, 00:14:15.041 "data_offset": 2048, 00:14:15.041 "data_size": 63488 00:14:15.041 }, 00:14:15.041 { 00:14:15.041 "name": "BaseBdev4", 00:14:15.041 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:15.041 "is_configured": true, 00:14:15.041 "data_offset": 2048, 00:14:15.041 
"data_size": 63488 00:14:15.041 } 00:14:15.041 ] 00:14:15.041 }' 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.041 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.611 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:15.611 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.611 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.611 [2024-11-10 15:23:21.703594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:15.611 [2024-11-10 15:23:21.703659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:15.611 [2024-11-10 15:23:21.703689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:15.611 [2024-11-10 15:23:21.703701] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:15.611 [2024-11-10 15:23:21.704246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:15.611 [2024-11-10 15:23:21.704273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:15.611 [2024-11-10 15:23:21.704375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:15.611 [2024-11-10 15:23:21.704393] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:15.611 [2024-11-10 15:23:21.704407] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
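After each `bdev_raid_remove_base_bdev spare`, the log calls `verify_raid_bdev_state raid_bdev1 online raid1 0 2` and the dumped JSON shows `num_base_bdevs_discovered`/`num_base_bdevs_operational` dropping from 3 to 2. A minimal sketch of those comparisons, written in Python rather than the actual `bdev_raid.sh` shell implementation (the function body here is illustrative; only the field names and expected values come from the log):

```python
# Field values copied from the raid_bdev_info JSON dumped in the log above,
# after the spare base bdev has been removed.
raid_bdev_info = {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "strip_size_kb": 0,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2,
}

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    # Compare the fetched info against the expectations the test passes in;
    # raid1 reports strip_size_kb of 0 because it has no striping.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return True

# Mirrors 'verify_raid_bdev_state raid_bdev1 online raid1 0 2' in the log.
print(verify_raid_bdev_state(raid_bdev_info, "online", "raid1", 0, 2))  # True
```

The array stays `online` through the removal because raid1 with superblock tolerates the missing member, which is exactly what the subsequent re-add and rebuild steps exercise.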
00:14:15.611 [2024-11-10 15:23:21.704429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.611 [2024-11-10 15:23:21.712291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037710 00:14:15.611 spare 00:14:15.611 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.611 15:23:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:15.611 [2024-11-10 15:23:21.714575] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.551 "name": "raid_bdev1", 00:14:16.551 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:16.551 "strip_size_kb": 0, 00:14:16.551 
"state": "online", 00:14:16.551 "raid_level": "raid1", 00:14:16.551 "superblock": true, 00:14:16.551 "num_base_bdevs": 4, 00:14:16.551 "num_base_bdevs_discovered": 3, 00:14:16.551 "num_base_bdevs_operational": 3, 00:14:16.551 "process": { 00:14:16.551 "type": "rebuild", 00:14:16.551 "target": "spare", 00:14:16.551 "progress": { 00:14:16.551 "blocks": 20480, 00:14:16.551 "percent": 32 00:14:16.551 } 00:14:16.551 }, 00:14:16.551 "base_bdevs_list": [ 00:14:16.551 { 00:14:16.551 "name": "spare", 00:14:16.551 "uuid": "8faa6caa-22ee-5d2b-bff5-819e66f7d74b", 00:14:16.551 "is_configured": true, 00:14:16.551 "data_offset": 2048, 00:14:16.551 "data_size": 63488 00:14:16.551 }, 00:14:16.551 { 00:14:16.551 "name": null, 00:14:16.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.551 "is_configured": false, 00:14:16.551 "data_offset": 2048, 00:14:16.551 "data_size": 63488 00:14:16.551 }, 00:14:16.551 { 00:14:16.551 "name": "BaseBdev3", 00:14:16.551 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:16.551 "is_configured": true, 00:14:16.551 "data_offset": 2048, 00:14:16.551 "data_size": 63488 00:14:16.551 }, 00:14:16.551 { 00:14:16.551 "name": "BaseBdev4", 00:14:16.551 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:16.551 "is_configured": true, 00:14:16.551 "data_offset": 2048, 00:14:16.551 "data_size": 63488 00:14:16.551 } 00:14:16.551 ] 00:14:16.551 }' 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:16.551 15:23:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.551 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.551 [2024-11-10 15:23:22.857126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.811 [2024-11-10 15:23:22.924704] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:16.811 [2024-11-10 15:23:22.924797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:16.811 [2024-11-10 15:23:22.924815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:16.811 [2024-11-10 15:23:22.924829] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.811 15:23:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.811 "name": "raid_bdev1", 00:14:16.811 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:16.811 "strip_size_kb": 0, 00:14:16.811 "state": "online", 00:14:16.811 "raid_level": "raid1", 00:14:16.811 "superblock": true, 00:14:16.811 "num_base_bdevs": 4, 00:14:16.811 "num_base_bdevs_discovered": 2, 00:14:16.811 "num_base_bdevs_operational": 2, 00:14:16.811 "base_bdevs_list": [ 00:14:16.811 { 00:14:16.811 "name": null, 00:14:16.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.811 "is_configured": false, 00:14:16.811 "data_offset": 0, 00:14:16.811 "data_size": 63488 00:14:16.811 }, 00:14:16.811 { 00:14:16.811 "name": null, 00:14:16.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.811 "is_configured": false, 00:14:16.811 "data_offset": 2048, 00:14:16.811 "data_size": 63488 00:14:16.811 }, 00:14:16.811 { 00:14:16.811 "name": "BaseBdev3", 00:14:16.811 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:16.811 "is_configured": true, 00:14:16.811 "data_offset": 2048, 00:14:16.811 "data_size": 63488 00:14:16.811 }, 00:14:16.811 { 00:14:16.811 "name": "BaseBdev4", 00:14:16.811 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:16.811 "is_configured": true, 00:14:16.811 "data_offset": 2048, 00:14:16.811 
"data_size": 63488 00:14:16.811 } 00:14:16.811 ] 00:14:16.811 }' 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.811 15:23:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.071 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.071 "name": "raid_bdev1", 00:14:17.071 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:17.071 "strip_size_kb": 0, 00:14:17.071 "state": "online", 00:14:17.071 "raid_level": "raid1", 00:14:17.071 "superblock": true, 00:14:17.071 "num_base_bdevs": 4, 00:14:17.071 "num_base_bdevs_discovered": 2, 00:14:17.071 "num_base_bdevs_operational": 2, 00:14:17.071 "base_bdevs_list": [ 00:14:17.071 { 00:14:17.071 "name": null, 00:14:17.071 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:17.071 "is_configured": false, 00:14:17.071 "data_offset": 0, 00:14:17.071 "data_size": 63488 00:14:17.071 }, 00:14:17.071 { 00:14:17.071 "name": null, 00:14:17.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.071 "is_configured": false, 00:14:17.071 "data_offset": 2048, 00:14:17.071 "data_size": 63488 00:14:17.071 }, 00:14:17.071 { 00:14:17.071 "name": "BaseBdev3", 00:14:17.071 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:17.071 "is_configured": true, 00:14:17.071 "data_offset": 2048, 00:14:17.071 "data_size": 63488 00:14:17.071 }, 00:14:17.071 { 00:14:17.071 "name": "BaseBdev4", 00:14:17.071 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:17.071 "is_configured": true, 00:14:17.071 "data_offset": 2048, 00:14:17.072 "data_size": 63488 00:14:17.072 } 00:14:17.072 ] 00:14:17.072 }' 00:14:17.072 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.331 15:23:23 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.331 [2024-11-10 15:23:23.504466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:17.331 [2024-11-10 15:23:23.504525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.331 [2024-11-10 15:23:23.504547] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:17.331 [2024-11-10 15:23:23.504558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.331 [2024-11-10 15:23:23.505169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.331 [2024-11-10 15:23:23.505198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.331 [2024-11-10 15:23:23.505281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:17.331 [2024-11-10 15:23:23.505313] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:17.331 [2024-11-10 15:23:23.505327] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:17.331 [2024-11-10 15:23:23.505348] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:17.331 BaseBdev1 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.331 15:23:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.270 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.270 "name": "raid_bdev1", 00:14:18.270 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:18.270 "strip_size_kb": 0, 00:14:18.270 "state": "online", 00:14:18.270 "raid_level": "raid1", 00:14:18.270 "superblock": true, 00:14:18.270 "num_base_bdevs": 4, 00:14:18.270 "num_base_bdevs_discovered": 2, 00:14:18.270 "num_base_bdevs_operational": 2, 00:14:18.270 "base_bdevs_list": [ 00:14:18.270 { 00:14:18.270 "name": null, 00:14:18.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.270 "is_configured": false, 00:14:18.270 
"data_offset": 0, 00:14:18.270 "data_size": 63488 00:14:18.270 }, 00:14:18.270 { 00:14:18.270 "name": null, 00:14:18.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.270 "is_configured": false, 00:14:18.270 "data_offset": 2048, 00:14:18.271 "data_size": 63488 00:14:18.271 }, 00:14:18.271 { 00:14:18.271 "name": "BaseBdev3", 00:14:18.271 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:18.271 "is_configured": true, 00:14:18.271 "data_offset": 2048, 00:14:18.271 "data_size": 63488 00:14:18.271 }, 00:14:18.271 { 00:14:18.271 "name": "BaseBdev4", 00:14:18.271 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:18.271 "is_configured": true, 00:14:18.271 "data_offset": 2048, 00:14:18.271 "data_size": 63488 00:14:18.271 } 00:14:18.271 ] 00:14:18.271 }' 00:14:18.271 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.271 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.840 "name": "raid_bdev1", 00:14:18.840 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:18.840 "strip_size_kb": 0, 00:14:18.840 "state": "online", 00:14:18.840 "raid_level": "raid1", 00:14:18.840 "superblock": true, 00:14:18.840 "num_base_bdevs": 4, 00:14:18.840 "num_base_bdevs_discovered": 2, 00:14:18.840 "num_base_bdevs_operational": 2, 00:14:18.840 "base_bdevs_list": [ 00:14:18.840 { 00:14:18.840 "name": null, 00:14:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.840 "is_configured": false, 00:14:18.840 "data_offset": 0, 00:14:18.840 "data_size": 63488 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": null, 00:14:18.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.840 "is_configured": false, 00:14:18.840 "data_offset": 2048, 00:14:18.840 "data_size": 63488 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": "BaseBdev3", 00:14:18.840 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:18.840 "is_configured": true, 00:14:18.840 "data_offset": 2048, 00:14:18.840 "data_size": 63488 00:14:18.840 }, 00:14:18.840 { 00:14:18.840 "name": "BaseBdev4", 00:14:18.840 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:18.840 "is_configured": true, 00:14:18.840 "data_offset": 2048, 00:14:18.840 "data_size": 63488 00:14:18.840 } 00:14:18.840 ] 00:14:18.840 }' 00:14:18.840 15:23:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:18.840 
15:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.840 [2024-11-10 15:23:25.077076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.840 [2024-11-10 15:23:25.077320] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:18.840 [2024-11-10 15:23:25.077384] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:18.840 request: 00:14:18.840 { 00:14:18.840 "base_bdev": "BaseBdev1", 00:14:18.840 "raid_bdev": "raid_bdev1", 00:14:18.840 "method": "bdev_raid_add_base_bdev", 00:14:18.840 "req_id": 1 00:14:18.840 } 00:14:18.840 Got JSON-RPC error response 00:14:18.840 response: 00:14:18.840 { 00:14:18.840 "code": -22, 00:14:18.840 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:18.840 } 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.840 15:23:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.779 15:23:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.779 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.039 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.039 "name": "raid_bdev1", 00:14:20.039 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:20.039 "strip_size_kb": 0, 00:14:20.039 "state": "online", 00:14:20.039 "raid_level": "raid1", 00:14:20.039 "superblock": true, 00:14:20.039 "num_base_bdevs": 4, 00:14:20.039 "num_base_bdevs_discovered": 2, 00:14:20.039 "num_base_bdevs_operational": 2, 00:14:20.039 "base_bdevs_list": [ 00:14:20.039 { 00:14:20.039 "name": null, 00:14:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.039 "is_configured": false, 00:14:20.039 "data_offset": 0, 00:14:20.039 "data_size": 63488 00:14:20.039 }, 00:14:20.039 { 00:14:20.039 "name": null, 00:14:20.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.039 "is_configured": false, 00:14:20.039 "data_offset": 2048, 00:14:20.039 "data_size": 63488 00:14:20.039 }, 00:14:20.039 { 00:14:20.039 "name": "BaseBdev3", 00:14:20.039 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:20.039 "is_configured": true, 00:14:20.039 "data_offset": 2048, 00:14:20.039 "data_size": 63488 00:14:20.039 }, 00:14:20.039 { 00:14:20.039 "name": "BaseBdev4", 00:14:20.039 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:20.039 "is_configured": true, 00:14:20.039 "data_offset": 2048, 00:14:20.039 "data_size": 63488 00:14:20.039 } 00:14:20.039 ] 00:14:20.039 }' 00:14:20.039 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.039 15:23:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.299 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.299 "name": "raid_bdev1", 00:14:20.299 "uuid": "69836b3a-6c44-4b6d-a92e-48328d55d7ca", 00:14:20.299 "strip_size_kb": 0, 00:14:20.299 "state": "online", 00:14:20.299 "raid_level": "raid1", 00:14:20.299 "superblock": true, 00:14:20.299 "num_base_bdevs": 4, 00:14:20.299 "num_base_bdevs_discovered": 2, 00:14:20.299 "num_base_bdevs_operational": 2, 00:14:20.299 "base_bdevs_list": [ 00:14:20.299 { 00:14:20.299 "name": null, 00:14:20.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.299 "is_configured": false, 00:14:20.299 "data_offset": 0, 00:14:20.299 "data_size": 63488 00:14:20.299 }, 00:14:20.299 { 00:14:20.299 "name": null, 00:14:20.299 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:20.299 "is_configured": false, 00:14:20.299 "data_offset": 2048, 00:14:20.299 "data_size": 63488 00:14:20.299 }, 00:14:20.299 { 00:14:20.299 "name": "BaseBdev3", 00:14:20.299 "uuid": "494ae814-7836-583d-bc8c-57961d4031dd", 00:14:20.299 "is_configured": true, 00:14:20.299 "data_offset": 2048, 00:14:20.299 "data_size": 63488 00:14:20.299 }, 00:14:20.299 { 00:14:20.299 "name": "BaseBdev4", 00:14:20.300 "uuid": "926d7c57-c0eb-5b48-9719-f3a0be1e2981", 00:14:20.300 "is_configured": true, 00:14:20.300 "data_offset": 2048, 00:14:20.300 "data_size": 63488 00:14:20.300 } 00:14:20.300 ] 00:14:20.300 }' 00:14:20.300 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.300 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.300 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 91133 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@952 -- # '[' -z 91133 ']' 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # kill -0 91133 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # uname 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91133 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:20.560 killing process with pid 91133 00:14:20.560 Received shutdown signal, test time was about 17.634367 
seconds 00:14:20.560 00:14:20.560 Latency(us) 00:14:20.560 [2024-11-10T15:23:26.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.560 [2024-11-10T15:23:26.923Z] =================================================================================================================== 00:14:20.560 [2024-11-10T15:23:26.923Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91133' 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@971 -- # kill 91133 00:14:20.560 [2024-11-10 15:23:26.743551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:20.560 [2024-11-10 15:23:26.743704] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.560 15:23:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@976 -- # wait 91133 00:14:20.560 [2024-11-10 15:23:26.743788] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.560 [2024-11-10 15:23:26.743809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:20.560 [2024-11-10 15:23:26.828967] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.820 15:23:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:20.820 00:14:20.820 real 0m19.734s 00:14:20.820 user 0m26.045s 00:14:20.820 sys 0m2.658s 00:14:20.820 ************************************ 00:14:20.820 END TEST raid_rebuild_test_sb_io 00:14:20.820 ************************************ 00:14:20.820 15:23:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:20.820 15:23:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:21.079 15:23:27 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:21.079 15:23:27 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:21.079 15:23:27 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:21.079 15:23:27 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.079 15:23:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.079 ************************************ 00:14:21.079 START TEST raid5f_state_function_test 00:14:21.079 ************************************ 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 false 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.079 15:23:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:21.079 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=91837 00:14:21.080 Process raid pid: 91837 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:21.080 
15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 91837' 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 91837 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 91837 ']' 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.080 15:23:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.080 [2024-11-10 15:23:27.349256] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:14:21.080 [2024-11-10 15:23:27.349404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.339 [2024-11-10 15:23:27.490120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:21.339 [2024-11-10 15:23:27.525846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.339 [2024-11-10 15:23:27.565895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.339 [2024-11-10 15:23:27.641776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.339 [2024-11-10 15:23:27.641816] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.932 [2024-11-10 15:23:28.174029] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.932 [2024-11-10 15:23:28.174079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.932 [2024-11-10 15:23:28.174094] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:21.932 [2024-11-10 15:23:28.174101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:21.932 [2024-11-10 15:23:28.174116] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:21.932 [2024-11-10 15:23:28.174122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.932 15:23:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.932 "name": "Existed_Raid", 00:14:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.932 "strip_size_kb": 64, 00:14:21.932 "state": 
"configuring", 00:14:21.932 "raid_level": "raid5f", 00:14:21.932 "superblock": false, 00:14:21.932 "num_base_bdevs": 3, 00:14:21.932 "num_base_bdevs_discovered": 0, 00:14:21.932 "num_base_bdevs_operational": 3, 00:14:21.932 "base_bdevs_list": [ 00:14:21.932 { 00:14:21.932 "name": "BaseBdev1", 00:14:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.932 "is_configured": false, 00:14:21.932 "data_offset": 0, 00:14:21.932 "data_size": 0 00:14:21.932 }, 00:14:21.932 { 00:14:21.932 "name": "BaseBdev2", 00:14:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.932 "is_configured": false, 00:14:21.932 "data_offset": 0, 00:14:21.932 "data_size": 0 00:14:21.932 }, 00:14:21.932 { 00:14:21.932 "name": "BaseBdev3", 00:14:21.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.932 "is_configured": false, 00:14:21.932 "data_offset": 0, 00:14:21.932 "data_size": 0 00:14:21.932 } 00:14:21.932 ] 00:14:21.932 }' 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.932 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.501 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 [2024-11-10 15:23:28.638100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:22.502 [2024-11-10 15:23:28.638187] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r 
raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 [2024-11-10 15:23:28.650129] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.502 [2024-11-10 15:23:28.650220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.502 [2024-11-10 15:23:28.650250] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:22.502 [2024-11-10 15:23:28.650271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:22.502 [2024-11-10 15:23:28.650291] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:22.502 [2024-11-10 15:23:28.650313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 [2024-11-10 15:23:28.677214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.502 BaseBdev1 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local 
bdev_name=BaseBdev1 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 [ 00:14:22.502 { 00:14:22.502 "name": "BaseBdev1", 00:14:22.502 "aliases": [ 00:14:22.502 "17d2ffd9-d461-4a51-ab29-ee760b7e836c" 00:14:22.502 ], 00:14:22.502 "product_name": "Malloc disk", 00:14:22.502 "block_size": 512, 00:14:22.502 "num_blocks": 65536, 00:14:22.502 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:22.502 "assigned_rate_limits": { 00:14:22.502 "rw_ios_per_sec": 0, 00:14:22.502 "rw_mbytes_per_sec": 0, 00:14:22.502 "r_mbytes_per_sec": 0, 00:14:22.502 "w_mbytes_per_sec": 0 00:14:22.502 }, 00:14:22.502 "claimed": true, 00:14:22.502 "claim_type": "exclusive_write", 00:14:22.502 "zoned": false, 00:14:22.502 "supported_io_types": { 00:14:22.502 "read": true, 00:14:22.502 "write": true, 
00:14:22.502 "unmap": true, 00:14:22.502 "flush": true, 00:14:22.502 "reset": true, 00:14:22.502 "nvme_admin": false, 00:14:22.502 "nvme_io": false, 00:14:22.502 "nvme_io_md": false, 00:14:22.502 "write_zeroes": true, 00:14:22.502 "zcopy": true, 00:14:22.502 "get_zone_info": false, 00:14:22.502 "zone_management": false, 00:14:22.502 "zone_append": false, 00:14:22.502 "compare": false, 00:14:22.502 "compare_and_write": false, 00:14:22.502 "abort": true, 00:14:22.502 "seek_hole": false, 00:14:22.502 "seek_data": false, 00:14:22.502 "copy": true, 00:14:22.502 "nvme_iov_md": false 00:14:22.502 }, 00:14:22.502 "memory_domains": [ 00:14:22.502 { 00:14:22.502 "dma_device_id": "system", 00:14:22.502 "dma_device_type": 1 00:14:22.502 }, 00:14:22.502 { 00:14:22.502 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.502 "dma_device_type": 2 00:14:22.502 } 00:14:22.502 ], 00:14:22.502 "driver_specific": {} 00:14:22.502 } 00:14:22.502 ] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.502 "name": "Existed_Raid", 00:14:22.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.502 "strip_size_kb": 64, 00:14:22.502 "state": "configuring", 00:14:22.502 "raid_level": "raid5f", 00:14:22.502 "superblock": false, 00:14:22.502 "num_base_bdevs": 3, 00:14:22.502 "num_base_bdevs_discovered": 1, 00:14:22.502 "num_base_bdevs_operational": 3, 00:14:22.502 "base_bdevs_list": [ 00:14:22.502 { 00:14:22.502 "name": "BaseBdev1", 00:14:22.502 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:22.502 "is_configured": true, 00:14:22.502 "data_offset": 0, 00:14:22.502 "data_size": 65536 00:14:22.502 }, 00:14:22.502 { 00:14:22.502 "name": "BaseBdev2", 00:14:22.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.502 "is_configured": false, 00:14:22.502 "data_offset": 0, 00:14:22.502 "data_size": 0 00:14:22.502 }, 00:14:22.502 { 00:14:22.502 "name": "BaseBdev3", 00:14:22.502 "uuid": "00000000-0000-0000-0000-000000000000", 
00:14:22.502 "is_configured": false, 00:14:22.502 "data_offset": 0, 00:14:22.502 "data_size": 0 00:14:22.502 } 00:14:22.502 ] 00:14:22.502 }' 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.502 15:23:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 [2024-11-10 15:23:29.157385] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:23.072 [2024-11-10 15:23:29.157494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 [2024-11-10 15:23:29.169419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.072 [2024-11-10 15:23:29.171597] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:23.072 [2024-11-10 15:23:29.171636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:23.072 [2024-11-10 15:23:29.171650] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 
00:14:23.072 [2024-11-10 15:23:29.171658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.072 "name": "Existed_Raid", 00:14:23.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.072 "strip_size_kb": 64, 00:14:23.072 "state": "configuring", 00:14:23.072 "raid_level": "raid5f", 00:14:23.072 "superblock": false, 00:14:23.072 "num_base_bdevs": 3, 00:14:23.072 "num_base_bdevs_discovered": 1, 00:14:23.072 "num_base_bdevs_operational": 3, 00:14:23.072 "base_bdevs_list": [ 00:14:23.072 { 00:14:23.072 "name": "BaseBdev1", 00:14:23.072 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:23.072 "is_configured": true, 00:14:23.072 "data_offset": 0, 00:14:23.072 "data_size": 65536 00:14:23.072 }, 00:14:23.072 { 00:14:23.072 "name": "BaseBdev2", 00:14:23.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.072 "is_configured": false, 00:14:23.072 "data_offset": 0, 00:14:23.072 "data_size": 0 00:14:23.072 }, 00:14:23.072 { 00:14:23.072 "name": "BaseBdev3", 00:14:23.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.072 "is_configured": false, 00:14:23.072 "data_offset": 0, 00:14:23.072 "data_size": 0 00:14:23.072 } 00:14:23.072 ] 00:14:23.072 }' 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.072 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.332 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:23.332 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.332 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
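The `bdev_malloc_create 32 512 -b BaseBdev2` call at bdev_raid.sh@252 creates a 32 MiB malloc disk with a 512-byte block size; the descriptor dumped in the log reports the resulting block count. A quick sketch of that arithmetic:

```python
# bdev_malloc_create takes a size in MiB and a block size in bytes;
# the bdev descriptor then reports the derived block count.
size_mib, block_size = 32, 512
num_blocks = size_mib * 1024 * 1024 // block_size
assert num_blocks == 65536  # matches "num_blocks": 65536 in the logged descriptor
```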
00:14:23.333 [2024-11-10 15:23:29.642408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:23.333 BaseBdev2 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.333 [ 00:14:23.333 { 00:14:23.333 "name": "BaseBdev2", 00:14:23.333 "aliases": [ 00:14:23.333 "68d42c90-d3ee-408e-b482-b7fa8e127a0b" 00:14:23.333 ], 00:14:23.333 "product_name": "Malloc disk", 00:14:23.333 "block_size": 512, 00:14:23.333 
"num_blocks": 65536, 00:14:23.333 "uuid": "68d42c90-d3ee-408e-b482-b7fa8e127a0b", 00:14:23.333 "assigned_rate_limits": { 00:14:23.333 "rw_ios_per_sec": 0, 00:14:23.333 "rw_mbytes_per_sec": 0, 00:14:23.333 "r_mbytes_per_sec": 0, 00:14:23.333 "w_mbytes_per_sec": 0 00:14:23.333 }, 00:14:23.333 "claimed": true, 00:14:23.333 "claim_type": "exclusive_write", 00:14:23.333 "zoned": false, 00:14:23.333 "supported_io_types": { 00:14:23.333 "read": true, 00:14:23.333 "write": true, 00:14:23.333 "unmap": true, 00:14:23.333 "flush": true, 00:14:23.333 "reset": true, 00:14:23.333 "nvme_admin": false, 00:14:23.333 "nvme_io": false, 00:14:23.333 "nvme_io_md": false, 00:14:23.333 "write_zeroes": true, 00:14:23.333 "zcopy": true, 00:14:23.333 "get_zone_info": false, 00:14:23.333 "zone_management": false, 00:14:23.333 "zone_append": false, 00:14:23.333 "compare": false, 00:14:23.333 "compare_and_write": false, 00:14:23.333 "abort": true, 00:14:23.333 "seek_hole": false, 00:14:23.333 "seek_data": false, 00:14:23.333 "copy": true, 00:14:23.333 "nvme_iov_md": false 00:14:23.333 }, 00:14:23.333 "memory_domains": [ 00:14:23.333 { 00:14:23.333 "dma_device_id": "system", 00:14:23.333 "dma_device_type": 1 00:14:23.333 }, 00:14:23.333 { 00:14:23.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.333 "dma_device_type": 2 00:14:23.333 } 00:14:23.333 ], 00:14:23.333 "driver_specific": {} 00:14:23.333 } 00:14:23.333 ] 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 
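The `verify_raid_bdev_state` helper pipes `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checks fields of the selected object. An equivalent selection in Python, using a pared-down excerpt of the JSON shape seen in this log (field values copied from the dump that follows):

```python
import json

# Same selection the script performs with jq:
#   .[] | select(.name == "Existed_Raid")
bdevs = json.loads("""[
  {"name": "Existed_Raid", "state": "configuring",
   "raid_level": "raid5f", "num_base_bdevs": 3,
   "num_base_bdevs_discovered": 2}
]""")
info = next(b for b in bdevs if b["name"] == "Existed_Raid")
assert info["state"] == "configuring"
assert info["num_base_bdevs_discovered"] == 2  # BaseBdev1 + BaseBdev2
```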
00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.333 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.592 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.592 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.592 "name": "Existed_Raid", 00:14:23.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.592 "strip_size_kb": 64, 00:14:23.592 "state": "configuring", 00:14:23.592 "raid_level": "raid5f", 00:14:23.592 "superblock": false, 00:14:23.592 "num_base_bdevs": 3, 
00:14:23.592 "num_base_bdevs_discovered": 2, 00:14:23.592 "num_base_bdevs_operational": 3, 00:14:23.592 "base_bdevs_list": [ 00:14:23.592 { 00:14:23.592 "name": "BaseBdev1", 00:14:23.592 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:23.592 "is_configured": true, 00:14:23.592 "data_offset": 0, 00:14:23.592 "data_size": 65536 00:14:23.592 }, 00:14:23.592 { 00:14:23.592 "name": "BaseBdev2", 00:14:23.592 "uuid": "68d42c90-d3ee-408e-b482-b7fa8e127a0b", 00:14:23.592 "is_configured": true, 00:14:23.592 "data_offset": 0, 00:14:23.592 "data_size": 65536 00:14:23.592 }, 00:14:23.592 { 00:14:23.592 "name": "BaseBdev3", 00:14:23.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.592 "is_configured": false, 00:14:23.592 "data_offset": 0, 00:14:23.592 "data_size": 0 00:14:23.592 } 00:14:23.592 ] 00:14:23.592 }' 00:14:23.592 15:23:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.592 15:23:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.852 [2024-11-10 15:23:30.203909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:23.852 [2024-11-10 15:23:30.204322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:23.852 [2024-11-10 15:23:30.204384] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:23.852 [2024-11-10 15:23:30.205517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:23.852 [2024-11-10 15:23:30.207241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:14:23.852 [2024-11-10 15:23:30.207300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:23.852 BaseBdev3 00:14:23.852 [2024-11-10 15:23:30.208092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.852 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.112 [ 00:14:24.112 { 00:14:24.112 "name": "BaseBdev3", 00:14:24.112 "aliases": 
[ 00:14:24.112 "369c0dc2-abb2-4c3d-9c19-d3d41696572f" 00:14:24.112 ], 00:14:24.112 "product_name": "Malloc disk", 00:14:24.112 "block_size": 512, 00:14:24.112 "num_blocks": 65536, 00:14:24.112 "uuid": "369c0dc2-abb2-4c3d-9c19-d3d41696572f", 00:14:24.112 "assigned_rate_limits": { 00:14:24.112 "rw_ios_per_sec": 0, 00:14:24.112 "rw_mbytes_per_sec": 0, 00:14:24.112 "r_mbytes_per_sec": 0, 00:14:24.112 "w_mbytes_per_sec": 0 00:14:24.112 }, 00:14:24.112 "claimed": true, 00:14:24.112 "claim_type": "exclusive_write", 00:14:24.112 "zoned": false, 00:14:24.112 "supported_io_types": { 00:14:24.112 "read": true, 00:14:24.112 "write": true, 00:14:24.112 "unmap": true, 00:14:24.112 "flush": true, 00:14:24.112 "reset": true, 00:14:24.112 "nvme_admin": false, 00:14:24.112 "nvme_io": false, 00:14:24.112 "nvme_io_md": false, 00:14:24.112 "write_zeroes": true, 00:14:24.112 "zcopy": true, 00:14:24.112 "get_zone_info": false, 00:14:24.112 "zone_management": false, 00:14:24.112 "zone_append": false, 00:14:24.112 "compare": false, 00:14:24.112 "compare_and_write": false, 00:14:24.112 "abort": true, 00:14:24.112 "seek_hole": false, 00:14:24.112 "seek_data": false, 00:14:24.112 "copy": true, 00:14:24.112 "nvme_iov_md": false 00:14:24.112 }, 00:14:24.112 "memory_domains": [ 00:14:24.112 { 00:14:24.112 "dma_device_id": "system", 00:14:24.112 "dma_device_type": 1 00:14:24.112 }, 00:14:24.112 { 00:14:24.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:24.112 "dma_device_type": 2 00:14:24.112 } 00:14:24.112 ], 00:14:24.112 "driver_specific": {} 00:14:24.112 } 00:14:24.112 ] 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs 
)) 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.112 "name": "Existed_Raid", 00:14:24.112 "uuid": "d528ea83-e514-4f38-8f9c-698b34c6c1c5", 00:14:24.112 "strip_size_kb": 64, 
00:14:24.112 "state": "online", 00:14:24.112 "raid_level": "raid5f", 00:14:24.112 "superblock": false, 00:14:24.112 "num_base_bdevs": 3, 00:14:24.112 "num_base_bdevs_discovered": 3, 00:14:24.112 "num_base_bdevs_operational": 3, 00:14:24.112 "base_bdevs_list": [ 00:14:24.112 { 00:14:24.112 "name": "BaseBdev1", 00:14:24.112 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:24.112 "is_configured": true, 00:14:24.112 "data_offset": 0, 00:14:24.112 "data_size": 65536 00:14:24.112 }, 00:14:24.112 { 00:14:24.112 "name": "BaseBdev2", 00:14:24.112 "uuid": "68d42c90-d3ee-408e-b482-b7fa8e127a0b", 00:14:24.112 "is_configured": true, 00:14:24.112 "data_offset": 0, 00:14:24.112 "data_size": 65536 00:14:24.112 }, 00:14:24.112 { 00:14:24.112 "name": "BaseBdev3", 00:14:24.112 "uuid": "369c0dc2-abb2-4c3d-9c19-d3d41696572f", 00:14:24.112 "is_configured": true, 00:14:24.112 "data_offset": 0, 00:14:24.112 "data_size": 65536 00:14:24.112 } 00:14:24.112 ] 00:14:24.112 }' 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.112 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 
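The `verify_raid_bdev_properties` helper in this trace builds a comparison key with `jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'` for the raid bdev and each base bdev. Since jq's `join` renders null fields as empty strings, a malloc bdev with only `block_size` set yields `512` followed by three spaces, which is what the `[[ 512 == \5\1\2\ \ \ ]]` comparisons in the trace match (the trailing spaces are hard to see in the log). A sketch of that key construction, assuming the descriptor shape logged above:

```python
# jq: [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")
# jq's join() renders null/absent fields as empty strings, so a malloc
# bdev with only block_size set produces "512" plus three trailing spaces.
def cmp_key(bdev: dict) -> str:
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

base_bdev = {"block_size": 512}  # values excerpted from the logged descriptors
assert cmp_key(base_bdev) == "512   "
```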
00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:24.372 [2024-11-10 15:23:30.664254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:24.372 "name": "Existed_Raid", 00:14:24.372 "aliases": [ 00:14:24.372 "d528ea83-e514-4f38-8f9c-698b34c6c1c5" 00:14:24.372 ], 00:14:24.372 "product_name": "Raid Volume", 00:14:24.372 "block_size": 512, 00:14:24.372 "num_blocks": 131072, 00:14:24.372 "uuid": "d528ea83-e514-4f38-8f9c-698b34c6c1c5", 00:14:24.372 "assigned_rate_limits": { 00:14:24.372 "rw_ios_per_sec": 0, 00:14:24.372 "rw_mbytes_per_sec": 0, 00:14:24.372 "r_mbytes_per_sec": 0, 00:14:24.372 "w_mbytes_per_sec": 0 00:14:24.372 }, 00:14:24.372 "claimed": false, 00:14:24.372 "zoned": false, 00:14:24.372 "supported_io_types": { 00:14:24.372 "read": true, 00:14:24.372 "write": true, 00:14:24.372 "unmap": false, 00:14:24.372 "flush": false, 00:14:24.372 "reset": true, 00:14:24.372 "nvme_admin": false, 00:14:24.372 "nvme_io": false, 00:14:24.372 "nvme_io_md": false, 00:14:24.372 "write_zeroes": true, 00:14:24.372 "zcopy": false, 00:14:24.372 "get_zone_info": false, 00:14:24.372 "zone_management": false, 00:14:24.372 "zone_append": false, 00:14:24.372 "compare": false, 00:14:24.372 "compare_and_write": false, 00:14:24.372 "abort": false, 00:14:24.372 "seek_hole": false, 00:14:24.372 "seek_data": false, 00:14:24.372 "copy": false, 00:14:24.372 "nvme_iov_md": false 00:14:24.372 }, 00:14:24.372 "driver_specific": { 00:14:24.372 "raid": { 00:14:24.372 "uuid": 
"d528ea83-e514-4f38-8f9c-698b34c6c1c5", 00:14:24.372 "strip_size_kb": 64, 00:14:24.372 "state": "online", 00:14:24.372 "raid_level": "raid5f", 00:14:24.372 "superblock": false, 00:14:24.372 "num_base_bdevs": 3, 00:14:24.372 "num_base_bdevs_discovered": 3, 00:14:24.372 "num_base_bdevs_operational": 3, 00:14:24.372 "base_bdevs_list": [ 00:14:24.372 { 00:14:24.372 "name": "BaseBdev1", 00:14:24.372 "uuid": "17d2ffd9-d461-4a51-ab29-ee760b7e836c", 00:14:24.372 "is_configured": true, 00:14:24.372 "data_offset": 0, 00:14:24.372 "data_size": 65536 00:14:24.372 }, 00:14:24.372 { 00:14:24.372 "name": "BaseBdev2", 00:14:24.372 "uuid": "68d42c90-d3ee-408e-b482-b7fa8e127a0b", 00:14:24.372 "is_configured": true, 00:14:24.372 "data_offset": 0, 00:14:24.372 "data_size": 65536 00:14:24.372 }, 00:14:24.372 { 00:14:24.372 "name": "BaseBdev3", 00:14:24.372 "uuid": "369c0dc2-abb2-4c3d-9c19-d3d41696572f", 00:14:24.372 "is_configured": true, 00:14:24.372 "data_offset": 0, 00:14:24.372 "data_size": 65536 00:14:24.372 } 00:14:24.372 ] 00:14:24.372 } 00:14:24.372 } 00:14:24.372 }' 00:14:24.372 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:24.633 BaseBdev2 00:14:24.633 BaseBdev3' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.633 [2024-11-10 15:23:30.968265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:24.633 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.634 15:23:30 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.634 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.634 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.634 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.634 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.894 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.894 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.894 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.894 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.894 15:23:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.894 "name": "Existed_Raid", 00:14:24.894 "uuid": "d528ea83-e514-4f38-8f9c-698b34c6c1c5", 00:14:24.894 "strip_size_kb": 64, 00:14:24.894 "state": "online", 00:14:24.894 "raid_level": "raid5f", 00:14:24.894 "superblock": false, 00:14:24.894 "num_base_bdevs": 3, 00:14:24.894 "num_base_bdevs_discovered": 2, 00:14:24.894 "num_base_bdevs_operational": 2, 00:14:24.894 "base_bdevs_list": [ 00:14:24.894 { 00:14:24.894 "name": null, 00:14:24.894 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:24.894 "is_configured": false, 00:14:24.894 "data_offset": 0, 00:14:24.894 "data_size": 65536 00:14:24.894 }, 00:14:24.894 { 00:14:24.894 "name": "BaseBdev2", 00:14:24.894 "uuid": "68d42c90-d3ee-408e-b482-b7fa8e127a0b", 00:14:24.894 "is_configured": true, 00:14:24.894 "data_offset": 0, 00:14:24.894 "data_size": 65536 00:14:24.894 }, 00:14:24.894 { 00:14:24.894 "name": "BaseBdev3", 00:14:24.894 "uuid": "369c0dc2-abb2-4c3d-9c19-d3d41696572f", 00:14:24.894 "is_configured": true, 00:14:24.894 "data_offset": 0, 00:14:24.894 "data_size": 65536 00:14:24.894 } 00:14:24.894 ] 00:14:24.894 }' 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.894 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:25.154 
15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.154 [2024-11-10 15:23:31.468765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.154 [2024-11-10 15:23:31.468926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:25.154 [2024-11-10 15:23:31.489464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.154 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:25.415 [2024-11-10 15:23:31.549504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:25.415 [2024-11-10 15:23:31.549600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.415 15:23:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.415 BaseBdev2 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:25.415 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 [ 00:14:25.416 { 00:14:25.416 "name": "BaseBdev2", 00:14:25.416 "aliases": [ 00:14:25.416 "8212ebab-e00a-47d5-bfb8-7f1c132812e8" 00:14:25.416 ], 00:14:25.416 "product_name": "Malloc disk", 00:14:25.416 "block_size": 512, 00:14:25.416 "num_blocks": 65536, 00:14:25.416 "uuid": 
"8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:25.416 "assigned_rate_limits": { 00:14:25.416 "rw_ios_per_sec": 0, 00:14:25.416 "rw_mbytes_per_sec": 0, 00:14:25.416 "r_mbytes_per_sec": 0, 00:14:25.416 "w_mbytes_per_sec": 0 00:14:25.416 }, 00:14:25.416 "claimed": false, 00:14:25.416 "zoned": false, 00:14:25.416 "supported_io_types": { 00:14:25.416 "read": true, 00:14:25.416 "write": true, 00:14:25.416 "unmap": true, 00:14:25.416 "flush": true, 00:14:25.416 "reset": true, 00:14:25.416 "nvme_admin": false, 00:14:25.416 "nvme_io": false, 00:14:25.416 "nvme_io_md": false, 00:14:25.416 "write_zeroes": true, 00:14:25.416 "zcopy": true, 00:14:25.416 "get_zone_info": false, 00:14:25.416 "zone_management": false, 00:14:25.416 "zone_append": false, 00:14:25.416 "compare": false, 00:14:25.416 "compare_and_write": false, 00:14:25.416 "abort": true, 00:14:25.416 "seek_hole": false, 00:14:25.416 "seek_data": false, 00:14:25.416 "copy": true, 00:14:25.416 "nvme_iov_md": false 00:14:25.416 }, 00:14:25.416 "memory_domains": [ 00:14:25.416 { 00:14:25.416 "dma_device_id": "system", 00:14:25.416 "dma_device_type": 1 00:14:25.416 }, 00:14:25.416 { 00:14:25.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.416 "dma_device_type": 2 00:14:25.416 } 00:14:25.416 ], 00:14:25.416 "driver_specific": {} 00:14:25.416 } 00:14:25.416 ] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 BaseBdev3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 [ 00:14:25.416 { 00:14:25.416 "name": "BaseBdev3", 00:14:25.416 "aliases": [ 00:14:25.416 "b55e5fe9-78f5-4e0d-b3c8-861319342e27" 00:14:25.416 ], 00:14:25.416 "product_name": "Malloc disk", 00:14:25.416 "block_size": 512, 00:14:25.416 "num_blocks": 
65536, 00:14:25.416 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:25.416 "assigned_rate_limits": { 00:14:25.416 "rw_ios_per_sec": 0, 00:14:25.416 "rw_mbytes_per_sec": 0, 00:14:25.416 "r_mbytes_per_sec": 0, 00:14:25.416 "w_mbytes_per_sec": 0 00:14:25.416 }, 00:14:25.416 "claimed": false, 00:14:25.416 "zoned": false, 00:14:25.416 "supported_io_types": { 00:14:25.416 "read": true, 00:14:25.416 "write": true, 00:14:25.416 "unmap": true, 00:14:25.416 "flush": true, 00:14:25.416 "reset": true, 00:14:25.416 "nvme_admin": false, 00:14:25.416 "nvme_io": false, 00:14:25.416 "nvme_io_md": false, 00:14:25.416 "write_zeroes": true, 00:14:25.416 "zcopy": true, 00:14:25.416 "get_zone_info": false, 00:14:25.416 "zone_management": false, 00:14:25.416 "zone_append": false, 00:14:25.416 "compare": false, 00:14:25.416 "compare_and_write": false, 00:14:25.416 "abort": true, 00:14:25.416 "seek_hole": false, 00:14:25.416 "seek_data": false, 00:14:25.416 "copy": true, 00:14:25.416 "nvme_iov_md": false 00:14:25.416 }, 00:14:25.416 "memory_domains": [ 00:14:25.416 { 00:14:25.416 "dma_device_id": "system", 00:14:25.416 "dma_device_type": 1 00:14:25.416 }, 00:14:25.416 { 00:14:25.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.416 "dma_device_type": 2 00:14:25.416 } 00:14:25.416 ], 00:14:25.416 "driver_specific": {} 00:14:25.416 } 00:14:25.416 ] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:25.416 15:23:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.416 [2024-11-10 15:23:31.744150] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:25.416 [2024-11-10 15:23:31.744267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:25.416 [2024-11-10 15:23:31.744311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.416 [2024-11-10 15:23:31.746410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 
00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.416 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.676 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.676 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.676 "name": "Existed_Raid", 00:14:25.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.676 "strip_size_kb": 64, 00:14:25.676 "state": "configuring", 00:14:25.676 "raid_level": "raid5f", 00:14:25.676 "superblock": false, 00:14:25.676 "num_base_bdevs": 3, 00:14:25.676 "num_base_bdevs_discovered": 2, 00:14:25.676 "num_base_bdevs_operational": 3, 00:14:25.676 "base_bdevs_list": [ 00:14:25.676 { 00:14:25.676 "name": "BaseBdev1", 00:14:25.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.676 "is_configured": false, 00:14:25.676 "data_offset": 0, 00:14:25.676 "data_size": 0 00:14:25.676 }, 00:14:25.676 { 00:14:25.676 "name": "BaseBdev2", 00:14:25.676 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:25.676 "is_configured": true, 00:14:25.676 "data_offset": 0, 00:14:25.676 "data_size": 65536 00:14:25.676 }, 00:14:25.676 { 00:14:25.676 "name": "BaseBdev3", 00:14:25.676 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:25.676 "is_configured": true, 00:14:25.676 "data_offset": 0, 00:14:25.676 "data_size": 65536 00:14:25.676 } 00:14:25.676 ] 00:14:25.676 }' 00:14:25.676 15:23:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.676 15:23:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.936 [2024-11-10 15:23:32.224277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.936 
15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.936 "name": "Existed_Raid", 00:14:25.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.936 "strip_size_kb": 64, 00:14:25.936 "state": "configuring", 00:14:25.936 "raid_level": "raid5f", 00:14:25.936 "superblock": false, 00:14:25.936 "num_base_bdevs": 3, 00:14:25.936 "num_base_bdevs_discovered": 1, 00:14:25.936 "num_base_bdevs_operational": 3, 00:14:25.936 "base_bdevs_list": [ 00:14:25.936 { 00:14:25.936 "name": "BaseBdev1", 00:14:25.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.936 "is_configured": false, 00:14:25.936 "data_offset": 0, 00:14:25.936 "data_size": 0 00:14:25.936 }, 00:14:25.936 { 00:14:25.936 "name": null, 00:14:25.936 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:25.936 "is_configured": false, 00:14:25.936 "data_offset": 0, 00:14:25.936 "data_size": 65536 00:14:25.936 }, 00:14:25.936 { 00:14:25.936 "name": "BaseBdev3", 00:14:25.936 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:25.936 "is_configured": true, 00:14:25.936 "data_offset": 0, 00:14:25.936 "data_size": 65536 00:14:25.936 } 00:14:25.936 ] 00:14:25.936 }' 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.936 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.506 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.506 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.506 15:23:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.506 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.506 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.506 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 [2024-11-10 15:23:32.757280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:26.507 BaseBdev1 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.507 15:23:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 [ 00:14:26.507 { 00:14:26.507 "name": "BaseBdev1", 00:14:26.507 "aliases": [ 00:14:26.507 "65997946-a7d0-4fe9-a0a1-4c2289176485" 00:14:26.507 ], 00:14:26.507 "product_name": "Malloc disk", 00:14:26.507 "block_size": 512, 00:14:26.507 "num_blocks": 65536, 00:14:26.507 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:26.507 "assigned_rate_limits": { 00:14:26.507 "rw_ios_per_sec": 0, 00:14:26.507 "rw_mbytes_per_sec": 0, 00:14:26.507 "r_mbytes_per_sec": 0, 00:14:26.507 "w_mbytes_per_sec": 0 00:14:26.507 }, 00:14:26.507 "claimed": true, 00:14:26.507 "claim_type": "exclusive_write", 00:14:26.507 "zoned": false, 00:14:26.507 "supported_io_types": { 00:14:26.507 "read": true, 00:14:26.507 "write": true, 00:14:26.507 "unmap": true, 00:14:26.507 "flush": true, 00:14:26.507 "reset": true, 00:14:26.507 "nvme_admin": false, 00:14:26.507 "nvme_io": false, 00:14:26.507 "nvme_io_md": false, 00:14:26.507 "write_zeroes": true, 00:14:26.507 "zcopy": true, 00:14:26.507 "get_zone_info": false, 00:14:26.507 "zone_management": false, 00:14:26.507 "zone_append": false, 00:14:26.507 "compare": false, 00:14:26.507 "compare_and_write": false, 00:14:26.507 "abort": true, 00:14:26.507 "seek_hole": false, 00:14:26.507 "seek_data": false, 00:14:26.507 "copy": true, 00:14:26.507 "nvme_iov_md": false 00:14:26.507 }, 00:14:26.507 "memory_domains": [ 00:14:26.507 { 00:14:26.507 "dma_device_id": "system", 00:14:26.507 "dma_device_type": 1 
00:14:26.507 }, 00:14:26.507 { 00:14:26.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.507 "dma_device_type": 2 00:14:26.507 } 00:14:26.507 ], 00:14:26.507 "driver_specific": {} 00:14:26.507 } 00:14:26.507 ] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.507 
15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.507 "name": "Existed_Raid", 00:14:26.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.507 "strip_size_kb": 64, 00:14:26.507 "state": "configuring", 00:14:26.507 "raid_level": "raid5f", 00:14:26.507 "superblock": false, 00:14:26.507 "num_base_bdevs": 3, 00:14:26.507 "num_base_bdevs_discovered": 2, 00:14:26.507 "num_base_bdevs_operational": 3, 00:14:26.507 "base_bdevs_list": [ 00:14:26.507 { 00:14:26.507 "name": "BaseBdev1", 00:14:26.507 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:26.507 "is_configured": true, 00:14:26.507 "data_offset": 0, 00:14:26.507 "data_size": 65536 00:14:26.507 }, 00:14:26.507 { 00:14:26.507 "name": null, 00:14:26.507 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:26.507 "is_configured": false, 00:14:26.507 "data_offset": 0, 00:14:26.507 "data_size": 65536 00:14:26.507 }, 00:14:26.507 { 00:14:26.507 "name": "BaseBdev3", 00:14:26.507 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:26.507 "is_configured": true, 00:14:26.507 "data_offset": 0, 00:14:26.507 "data_size": 65536 00:14:26.507 } 00:14:26.507 ] 00:14:26.507 }' 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.507 15:23:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.121 15:23:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.121 [2024-11-10 15:23:33.273481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.121 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.122 15:23:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.122 "name": "Existed_Raid", 00:14:27.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.122 "strip_size_kb": 64, 00:14:27.122 "state": "configuring", 00:14:27.122 "raid_level": "raid5f", 00:14:27.122 "superblock": false, 00:14:27.122 "num_base_bdevs": 3, 00:14:27.122 "num_base_bdevs_discovered": 1, 00:14:27.122 "num_base_bdevs_operational": 3, 00:14:27.122 "base_bdevs_list": [ 00:14:27.122 { 00:14:27.122 "name": "BaseBdev1", 00:14:27.122 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:27.122 "is_configured": true, 00:14:27.122 "data_offset": 0, 00:14:27.122 "data_size": 65536 00:14:27.122 }, 00:14:27.122 { 00:14:27.122 "name": null, 00:14:27.122 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:27.122 "is_configured": false, 00:14:27.122 "data_offset": 0, 00:14:27.122 "data_size": 65536 00:14:27.122 }, 00:14:27.122 { 00:14:27.122 "name": null, 00:14:27.122 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:27.122 "is_configured": false, 00:14:27.122 "data_offset": 0, 00:14:27.122 "data_size": 65536 00:14:27.122 } 00:14:27.122 ] 00:14:27.122 }' 00:14:27.122 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.122 15:23:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.714 [2024-11-10 15:23:33.817634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:27.714 
15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.714 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.715 "name": "Existed_Raid", 00:14:27.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.715 "strip_size_kb": 64, 00:14:27.715 "state": "configuring", 00:14:27.715 "raid_level": "raid5f", 00:14:27.715 "superblock": false, 00:14:27.715 "num_base_bdevs": 3, 00:14:27.715 "num_base_bdevs_discovered": 2, 00:14:27.715 "num_base_bdevs_operational": 3, 00:14:27.715 "base_bdevs_list": [ 00:14:27.715 { 00:14:27.715 "name": "BaseBdev1", 00:14:27.715 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:27.715 "is_configured": true, 00:14:27.715 "data_offset": 0, 00:14:27.715 "data_size": 65536 00:14:27.715 }, 00:14:27.715 { 00:14:27.715 "name": null, 00:14:27.715 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:27.715 "is_configured": 
false, 00:14:27.715 "data_offset": 0, 00:14:27.715 "data_size": 65536 00:14:27.715 }, 00:14:27.715 { 00:14:27.715 "name": "BaseBdev3", 00:14:27.715 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:27.715 "is_configured": true, 00:14:27.715 "data_offset": 0, 00:14:27.715 "data_size": 65536 00:14:27.715 } 00:14:27.715 ] 00:14:27.715 }' 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.715 15:23:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.975 [2024-11-10 15:23:34.313818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:27.975 15:23:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:27.975 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.235 "name": "Existed_Raid", 00:14:28.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.235 "strip_size_kb": 64, 00:14:28.235 "state": "configuring", 00:14:28.235 "raid_level": "raid5f", 00:14:28.235 "superblock": false, 00:14:28.235 "num_base_bdevs": 3, 00:14:28.235 
"num_base_bdevs_discovered": 1, 00:14:28.235 "num_base_bdevs_operational": 3, 00:14:28.235 "base_bdevs_list": [ 00:14:28.235 { 00:14:28.235 "name": null, 00:14:28.235 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:28.235 "is_configured": false, 00:14:28.235 "data_offset": 0, 00:14:28.235 "data_size": 65536 00:14:28.235 }, 00:14:28.235 { 00:14:28.235 "name": null, 00:14:28.235 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:28.235 "is_configured": false, 00:14:28.235 "data_offset": 0, 00:14:28.235 "data_size": 65536 00:14:28.235 }, 00:14:28.235 { 00:14:28.235 "name": "BaseBdev3", 00:14:28.235 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:28.235 "is_configured": true, 00:14:28.235 "data_offset": 0, 00:14:28.235 "data_size": 65536 00:14:28.235 } 00:14:28.235 ] 00:14:28.235 }' 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.235 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.495 15:23:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.495 [2024-11-10 15:23:34.797896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.495 15:23:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:28.495 "name": "Existed_Raid", 00:14:28.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.495 "strip_size_kb": 64, 00:14:28.495 "state": "configuring", 00:14:28.495 "raid_level": "raid5f", 00:14:28.495 "superblock": false, 00:14:28.495 "num_base_bdevs": 3, 00:14:28.495 "num_base_bdevs_discovered": 2, 00:14:28.495 "num_base_bdevs_operational": 3, 00:14:28.495 "base_bdevs_list": [ 00:14:28.495 { 00:14:28.495 "name": null, 00:14:28.495 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:28.495 "is_configured": false, 00:14:28.495 "data_offset": 0, 00:14:28.495 "data_size": 65536 00:14:28.495 }, 00:14:28.495 { 00:14:28.495 "name": "BaseBdev2", 00:14:28.495 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:28.495 "is_configured": true, 00:14:28.495 "data_offset": 0, 00:14:28.495 "data_size": 65536 00:14:28.495 }, 00:14:28.495 { 00:14:28.495 "name": "BaseBdev3", 00:14:28.495 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:28.495 "is_configured": true, 00:14:28.495 "data_offset": 0, 00:14:28.495 "data_size": 65536 00:14:28.495 } 00:14:28.495 ] 00:14:28.495 }' 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:28.495 15:23:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.065 15:23:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65997946-a7d0-4fe9-a0a1-4c2289176485 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.065 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.065 [2024-11-10 15:23:35.386844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:29.065 [2024-11-10 15:23:35.386965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:29.065 [2024-11-10 15:23:35.386991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:29.066 [2024-11-10 15:23:35.387358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:29.066 [2024-11-10 15:23:35.387867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:29.066 [2024-11-10 15:23:35.387923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:29.066 [2024-11-10 
15:23:35.388167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.066 NewBaseBdev 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.066 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.066 [ 00:14:29.066 { 00:14:29.066 "name": "NewBaseBdev", 00:14:29.066 "aliases": [ 00:14:29.066 "65997946-a7d0-4fe9-a0a1-4c2289176485" 00:14:29.066 ], 00:14:29.066 "product_name": "Malloc disk", 00:14:29.066 "block_size": 512, 00:14:29.066 "num_blocks": 65536, 00:14:29.066 
"uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:29.066 "assigned_rate_limits": { 00:14:29.066 "rw_ios_per_sec": 0, 00:14:29.066 "rw_mbytes_per_sec": 0, 00:14:29.066 "r_mbytes_per_sec": 0, 00:14:29.066 "w_mbytes_per_sec": 0 00:14:29.066 }, 00:14:29.066 "claimed": true, 00:14:29.066 "claim_type": "exclusive_write", 00:14:29.066 "zoned": false, 00:14:29.066 "supported_io_types": { 00:14:29.066 "read": true, 00:14:29.066 "write": true, 00:14:29.066 "unmap": true, 00:14:29.066 "flush": true, 00:14:29.066 "reset": true, 00:14:29.066 "nvme_admin": false, 00:14:29.066 "nvme_io": false, 00:14:29.066 "nvme_io_md": false, 00:14:29.066 "write_zeroes": true, 00:14:29.066 "zcopy": true, 00:14:29.066 "get_zone_info": false, 00:14:29.066 "zone_management": false, 00:14:29.066 "zone_append": false, 00:14:29.066 "compare": false, 00:14:29.066 "compare_and_write": false, 00:14:29.066 "abort": true, 00:14:29.066 "seek_hole": false, 00:14:29.066 "seek_data": false, 00:14:29.066 "copy": true, 00:14:29.066 "nvme_iov_md": false 00:14:29.066 }, 00:14:29.066 "memory_domains": [ 00:14:29.066 { 00:14:29.066 "dma_device_id": "system", 00:14:29.066 "dma_device_type": 1 00:14:29.066 }, 00:14:29.066 { 00:14:29.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.326 "dma_device_type": 2 00:14:29.326 } 00:14:29.326 ], 00:14:29.326 "driver_specific": {} 00:14:29.326 } 00:14:29.326 ] 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.326 
15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.326 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.327 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.327 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.327 "name": "Existed_Raid", 00:14:29.327 "uuid": "7d2d27ca-a13b-4544-8980-6e9de22cbf16", 00:14:29.327 "strip_size_kb": 64, 00:14:29.327 "state": "online", 00:14:29.327 "raid_level": "raid5f", 00:14:29.327 "superblock": false, 00:14:29.327 "num_base_bdevs": 3, 00:14:29.327 "num_base_bdevs_discovered": 3, 00:14:29.327 "num_base_bdevs_operational": 3, 00:14:29.327 "base_bdevs_list": [ 00:14:29.327 { 00:14:29.327 "name": "NewBaseBdev", 00:14:29.327 "uuid": "65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:29.327 "is_configured": 
true, 00:14:29.327 "data_offset": 0, 00:14:29.327 "data_size": 65536 00:14:29.327 }, 00:14:29.327 { 00:14:29.327 "name": "BaseBdev2", 00:14:29.327 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:29.327 "is_configured": true, 00:14:29.327 "data_offset": 0, 00:14:29.327 "data_size": 65536 00:14:29.327 }, 00:14:29.327 { 00:14:29.327 "name": "BaseBdev3", 00:14:29.327 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:29.327 "is_configured": true, 00:14:29.327 "data_offset": 0, 00:14:29.327 "data_size": 65536 00:14:29.327 } 00:14:29.327 ] 00:14:29.327 }' 00:14:29.327 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.327 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:29.587 [2024-11-10 15:23:35.859197] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:29.587 "name": "Existed_Raid", 00:14:29.587 "aliases": [ 00:14:29.587 "7d2d27ca-a13b-4544-8980-6e9de22cbf16" 00:14:29.587 ], 00:14:29.587 "product_name": "Raid Volume", 00:14:29.587 "block_size": 512, 00:14:29.587 "num_blocks": 131072, 00:14:29.587 "uuid": "7d2d27ca-a13b-4544-8980-6e9de22cbf16", 00:14:29.587 "assigned_rate_limits": { 00:14:29.587 "rw_ios_per_sec": 0, 00:14:29.587 "rw_mbytes_per_sec": 0, 00:14:29.587 "r_mbytes_per_sec": 0, 00:14:29.587 "w_mbytes_per_sec": 0 00:14:29.587 }, 00:14:29.587 "claimed": false, 00:14:29.587 "zoned": false, 00:14:29.587 "supported_io_types": { 00:14:29.587 "read": true, 00:14:29.587 "write": true, 00:14:29.587 "unmap": false, 00:14:29.587 "flush": false, 00:14:29.587 "reset": true, 00:14:29.587 "nvme_admin": false, 00:14:29.587 "nvme_io": false, 00:14:29.587 "nvme_io_md": false, 00:14:29.587 "write_zeroes": true, 00:14:29.587 "zcopy": false, 00:14:29.587 "get_zone_info": false, 00:14:29.587 "zone_management": false, 00:14:29.587 "zone_append": false, 00:14:29.587 "compare": false, 00:14:29.587 "compare_and_write": false, 00:14:29.587 "abort": false, 00:14:29.587 "seek_hole": false, 00:14:29.587 "seek_data": false, 00:14:29.587 "copy": false, 00:14:29.587 "nvme_iov_md": false 00:14:29.587 }, 00:14:29.587 "driver_specific": { 00:14:29.587 "raid": { 00:14:29.587 "uuid": "7d2d27ca-a13b-4544-8980-6e9de22cbf16", 00:14:29.587 "strip_size_kb": 64, 00:14:29.587 "state": "online", 00:14:29.587 "raid_level": "raid5f", 00:14:29.587 "superblock": false, 00:14:29.587 "num_base_bdevs": 3, 00:14:29.587 "num_base_bdevs_discovered": 3, 00:14:29.587 "num_base_bdevs_operational": 3, 00:14:29.587 "base_bdevs_list": [ 00:14:29.587 { 00:14:29.587 "name": "NewBaseBdev", 00:14:29.587 "uuid": 
"65997946-a7d0-4fe9-a0a1-4c2289176485", 00:14:29.587 "is_configured": true, 00:14:29.587 "data_offset": 0, 00:14:29.587 "data_size": 65536 00:14:29.587 }, 00:14:29.587 { 00:14:29.587 "name": "BaseBdev2", 00:14:29.587 "uuid": "8212ebab-e00a-47d5-bfb8-7f1c132812e8", 00:14:29.587 "is_configured": true, 00:14:29.587 "data_offset": 0, 00:14:29.587 "data_size": 65536 00:14:29.587 }, 00:14:29.587 { 00:14:29.587 "name": "BaseBdev3", 00:14:29.587 "uuid": "b55e5fe9-78f5-4e0d-b3c8-861319342e27", 00:14:29.587 "is_configured": true, 00:14:29.587 "data_offset": 0, 00:14:29.587 "data_size": 65536 00:14:29.587 } 00:14:29.587 ] 00:14:29.587 } 00:14:29.587 } 00:14:29.587 }' 00:14:29.587 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:29.848 BaseBdev2 00:14:29.848 BaseBdev3' 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.848 15:23:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.848 [2024-11-10 15:23:36.135085] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.848 [2024-11-10 15:23:36.135154] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:29.848 [2024-11-10 15:23:36.135274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:29.848 [2024-11-10 15:23:36.135578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:29.848 [2024-11-10 15:23:36.135629] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 91837 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 91837 ']' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 91837 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 91837 00:14:29.848 15:23:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 91837' 00:14:29.848 killing process with pid 91837 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 91837 00:14:29.848 [2024-11-10 15:23:36.174566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.848 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 91837 00:14:30.107 [2024-11-10 15:23:36.234638] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:30.368 ************************************ 00:14:30.368 END TEST raid5f_state_function_test 00:14:30.368 ************************************ 00:14:30.368 00:14:30.368 real 0m9.325s 00:14:30.368 user 0m15.516s 00:14:30.368 sys 0m2.142s 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.368 15:23:36 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:30.368 15:23:36 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:30.368 15:23:36 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:30.368 15:23:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:30.368 ************************************ 00:14:30.368 START TEST raid5f_state_function_test_sb 00:14:30.368 ************************************ 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 3 true 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:30.368 
15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=92447 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92447' 00:14:30.368 Process raid pid: 92447 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 92447 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 92447 ']' 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.368 15:23:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.368 15:23:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.628 [2024-11-10 15:23:36.743528] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:14:30.628 [2024-11-10 15:23:36.743756] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.628 [2024-11-10 15:23:36.877970] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:30.628 [2024-11-10 15:23:36.914335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.628 [2024-11-10 15:23:36.955499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.888 [2024-11-10 15:23:37.032370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:30.888 [2024-11-10 15:23:37.032411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.458 [2024-11-10 15:23:37.557332] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.458 [2024-11-10 15:23:37.557394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.458 [2024-11-10 15:23:37.557409] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.458 [2024-11-10 15:23:37.557417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.458 [2024-11-10 15:23:37.557431] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.458 [2024-11-10 15:23:37.557438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.458 15:23:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.458 "name": "Existed_Raid", 00:14:31.458 "uuid": "d7d4546c-4120-4c0f-a865-6b46f733b250", 
00:14:31.458 "strip_size_kb": 64, 00:14:31.458 "state": "configuring", 00:14:31.458 "raid_level": "raid5f", 00:14:31.458 "superblock": true, 00:14:31.458 "num_base_bdevs": 3, 00:14:31.458 "num_base_bdevs_discovered": 0, 00:14:31.458 "num_base_bdevs_operational": 3, 00:14:31.458 "base_bdevs_list": [ 00:14:31.458 { 00:14:31.458 "name": "BaseBdev1", 00:14:31.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.458 "is_configured": false, 00:14:31.458 "data_offset": 0, 00:14:31.458 "data_size": 0 00:14:31.458 }, 00:14:31.458 { 00:14:31.458 "name": "BaseBdev2", 00:14:31.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.458 "is_configured": false, 00:14:31.458 "data_offset": 0, 00:14:31.458 "data_size": 0 00:14:31.458 }, 00:14:31.458 { 00:14:31.458 "name": "BaseBdev3", 00:14:31.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.458 "is_configured": false, 00:14:31.458 "data_offset": 0, 00:14:31.458 "data_size": 0 00:14:31.458 } 00:14:31.458 ] 00:14:31.458 }' 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.458 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 [2024-11-10 15:23:37.977345] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.719 [2024-11-10 15:23:37.977467] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.719 15:23:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 [2024-11-10 15:23:37.989371] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:31.719 [2024-11-10 15:23:37.989450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:31.719 [2024-11-10 15:23:37.989497] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:31.719 [2024-11-10 15:23:37.989517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:31.719 [2024-11-10 15:23:37.989537] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:31.719 [2024-11-10 15:23:37.989560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 [2024-11-10 15:23:38.016483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:31.719 BaseBdev1 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 [ 00:14:31.719 { 00:14:31.719 "name": "BaseBdev1", 00:14:31.719 "aliases": [ 00:14:31.719 "ca612afc-63dc-4d63-b65f-8803b10fc5cc" 00:14:31.719 ], 00:14:31.719 "product_name": "Malloc disk", 00:14:31.719 "block_size": 512, 00:14:31.719 "num_blocks": 65536, 00:14:31.719 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:31.719 "assigned_rate_limits": { 00:14:31.719 "rw_ios_per_sec": 0, 00:14:31.719 "rw_mbytes_per_sec": 0, 00:14:31.719 "r_mbytes_per_sec": 0, 00:14:31.719 "w_mbytes_per_sec": 0 00:14:31.719 }, 
00:14:31.719 "claimed": true, 00:14:31.719 "claim_type": "exclusive_write", 00:14:31.719 "zoned": false, 00:14:31.719 "supported_io_types": { 00:14:31.719 "read": true, 00:14:31.719 "write": true, 00:14:31.719 "unmap": true, 00:14:31.719 "flush": true, 00:14:31.719 "reset": true, 00:14:31.719 "nvme_admin": false, 00:14:31.719 "nvme_io": false, 00:14:31.719 "nvme_io_md": false, 00:14:31.719 "write_zeroes": true, 00:14:31.719 "zcopy": true, 00:14:31.719 "get_zone_info": false, 00:14:31.719 "zone_management": false, 00:14:31.719 "zone_append": false, 00:14:31.719 "compare": false, 00:14:31.719 "compare_and_write": false, 00:14:31.719 "abort": true, 00:14:31.719 "seek_hole": false, 00:14:31.719 "seek_data": false, 00:14:31.719 "copy": true, 00:14:31.719 "nvme_iov_md": false 00:14:31.719 }, 00:14:31.719 "memory_domains": [ 00:14:31.719 { 00:14:31.719 "dma_device_id": "system", 00:14:31.719 "dma_device_type": 1 00:14:31.719 }, 00:14:31.719 { 00:14:31.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.719 "dma_device_type": 2 00:14:31.719 } 00:14:31.719 ], 00:14:31.719 "driver_specific": {} 00:14:31.719 } 00:14:31.719 ] 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.719 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.979 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.979 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.979 "name": "Existed_Raid", 00:14:31.979 "uuid": "6732581d-6581-45c2-b26c-3f27fe0d92e1", 00:14:31.979 "strip_size_kb": 64, 00:14:31.979 "state": "configuring", 00:14:31.979 "raid_level": "raid5f", 00:14:31.979 "superblock": true, 00:14:31.979 "num_base_bdevs": 3, 00:14:31.979 "num_base_bdevs_discovered": 1, 00:14:31.979 "num_base_bdevs_operational": 3, 00:14:31.979 "base_bdevs_list": [ 00:14:31.979 { 00:14:31.979 "name": "BaseBdev1", 00:14:31.979 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:31.979 "is_configured": true, 00:14:31.979 "data_offset": 2048, 00:14:31.979 "data_size": 63488 00:14:31.979 }, 00:14:31.979 { 00:14:31.979 "name": "BaseBdev2", 00:14:31.979 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:31.979 "is_configured": false, 00:14:31.979 "data_offset": 0, 00:14:31.979 "data_size": 0 00:14:31.979 }, 00:14:31.979 { 00:14:31.979 "name": "BaseBdev3", 00:14:31.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.979 "is_configured": false, 00:14:31.979 "data_offset": 0, 00:14:31.979 "data_size": 0 00:14:31.979 } 00:14:31.979 ] 00:14:31.979 }' 00:14:31.979 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.979 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 [2024-11-10 15:23:38.488677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:32.240 [2024-11-10 15:23:38.488835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 [2024-11-10 15:23:38.500734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.240 [2024-11-10 15:23:38.503008] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:32.240 [2024-11-10 15:23:38.503063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:32.240 [2024-11-10 15:23:38.503079] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:32.240 [2024-11-10 15:23:38.503087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.240 15:23:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.240 "name": "Existed_Raid", 00:14:32.240 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:32.240 "strip_size_kb": 64, 00:14:32.240 "state": "configuring", 00:14:32.240 "raid_level": "raid5f", 00:14:32.240 "superblock": true, 00:14:32.240 "num_base_bdevs": 3, 00:14:32.240 "num_base_bdevs_discovered": 1, 00:14:32.240 "num_base_bdevs_operational": 3, 00:14:32.240 "base_bdevs_list": [ 00:14:32.240 { 00:14:32.240 "name": "BaseBdev1", 00:14:32.240 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:32.240 "is_configured": true, 00:14:32.240 "data_offset": 2048, 00:14:32.240 "data_size": 63488 00:14:32.240 }, 00:14:32.240 { 00:14:32.240 "name": "BaseBdev2", 00:14:32.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.240 "is_configured": false, 00:14:32.240 "data_offset": 0, 00:14:32.240 "data_size": 0 00:14:32.240 }, 00:14:32.240 { 00:14:32.240 "name": "BaseBdev3", 00:14:32.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.240 "is_configured": false, 00:14:32.240 "data_offset": 0, 00:14:32.240 "data_size": 0 00:14:32.240 } 00:14:32.240 ] 00:14:32.240 }' 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.240 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.810 [2024-11-10 15:23:38.973643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.810 BaseBdev2 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.810 15:23:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.810 [ 00:14:32.810 { 00:14:32.810 "name": "BaseBdev2", 00:14:32.810 "aliases": [ 00:14:32.810 "1cda0593-0de8-4de2-8cfd-4895e17a4845" 00:14:32.810 ], 00:14:32.810 "product_name": "Malloc disk", 00:14:32.810 "block_size": 512, 00:14:32.810 "num_blocks": 65536, 00:14:32.810 "uuid": "1cda0593-0de8-4de2-8cfd-4895e17a4845", 00:14:32.810 "assigned_rate_limits": { 00:14:32.810 "rw_ios_per_sec": 0, 00:14:32.810 "rw_mbytes_per_sec": 0, 00:14:32.810 "r_mbytes_per_sec": 0, 00:14:32.810 "w_mbytes_per_sec": 0 00:14:32.810 }, 00:14:32.810 "claimed": true, 00:14:32.810 "claim_type": "exclusive_write", 00:14:32.810 "zoned": false, 00:14:32.810 "supported_io_types": { 00:14:32.810 "read": true, 00:14:32.810 "write": true, 00:14:32.810 "unmap": true, 00:14:32.810 "flush": true, 00:14:32.810 "reset": true, 00:14:32.810 "nvme_admin": false, 00:14:32.810 "nvme_io": false, 00:14:32.810 "nvme_io_md": false, 00:14:32.810 "write_zeroes": true, 00:14:32.810 "zcopy": true, 00:14:32.810 "get_zone_info": false, 00:14:32.810 "zone_management": false, 00:14:32.810 "zone_append": false, 00:14:32.810 "compare": false, 00:14:32.810 "compare_and_write": false, 00:14:32.810 "abort": true, 00:14:32.810 "seek_hole": false, 00:14:32.810 "seek_data": false, 00:14:32.810 "copy": true, 00:14:32.810 "nvme_iov_md": false 00:14:32.810 }, 00:14:32.810 "memory_domains": [ 00:14:32.810 { 00:14:32.810 "dma_device_id": "system", 00:14:32.810 "dma_device_type": 1 00:14:32.810 }, 00:14:32.810 { 00:14:32.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.810 "dma_device_type": 2 00:14:32.810 } 00:14:32.810 ], 00:14:32.810 "driver_specific": {} 00:14:32.810 } 00:14:32.810 ] 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@909 -- # return 0 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.810 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.811 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.811 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.811 15:23:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.811 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.811 "name": "Existed_Raid", 00:14:32.811 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:32.811 "strip_size_kb": 64, 00:14:32.811 "state": "configuring", 00:14:32.811 "raid_level": "raid5f", 00:14:32.811 "superblock": true, 00:14:32.811 "num_base_bdevs": 3, 00:14:32.811 "num_base_bdevs_discovered": 2, 00:14:32.811 "num_base_bdevs_operational": 3, 00:14:32.811 "base_bdevs_list": [ 00:14:32.811 { 00:14:32.811 "name": "BaseBdev1", 00:14:32.811 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:32.811 "is_configured": true, 00:14:32.811 "data_offset": 2048, 00:14:32.811 "data_size": 63488 00:14:32.811 }, 00:14:32.811 { 00:14:32.811 "name": "BaseBdev2", 00:14:32.811 "uuid": "1cda0593-0de8-4de2-8cfd-4895e17a4845", 00:14:32.811 "is_configured": true, 00:14:32.811 "data_offset": 2048, 00:14:32.811 "data_size": 63488 00:14:32.811 }, 00:14:32.811 { 00:14:32.811 "name": "BaseBdev3", 00:14:32.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.811 "is_configured": false, 00:14:32.811 "data_offset": 0, 00:14:32.811 "data_size": 0 00:14:32.811 } 00:14:32.811 ] 00:14:32.811 }' 00:14:32.811 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.811 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.380 [2024-11-10 15:23:39.484825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:14:33.380 [2024-11-10 15:23:39.485648] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:33.380 [2024-11-10 15:23:39.485710] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.380 BaseBdev3 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.380 [2024-11-10 15:23:39.486771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:33.380 [2024-11-10 15:23:39.488425] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:33.380 [2024-11-10 15:23:39.488491] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:33.380 [2024-11-10 15:23:39.488925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.380 15:23:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.380 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.380 [ 00:14:33.380 { 00:14:33.380 "name": "BaseBdev3", 00:14:33.380 "aliases": [ 00:14:33.380 "6ba082e5-9838-4a0c-8451-af4fbec768c0" 00:14:33.380 ], 00:14:33.380 "product_name": "Malloc disk", 00:14:33.380 "block_size": 512, 00:14:33.380 "num_blocks": 65536, 00:14:33.380 "uuid": "6ba082e5-9838-4a0c-8451-af4fbec768c0", 00:14:33.380 "assigned_rate_limits": { 00:14:33.380 "rw_ios_per_sec": 0, 00:14:33.380 "rw_mbytes_per_sec": 0, 00:14:33.380 "r_mbytes_per_sec": 0, 00:14:33.380 "w_mbytes_per_sec": 0 00:14:33.380 }, 00:14:33.380 "claimed": true, 00:14:33.380 "claim_type": "exclusive_write", 00:14:33.380 "zoned": false, 00:14:33.380 "supported_io_types": { 00:14:33.380 "read": true, 00:14:33.380 "write": true, 00:14:33.380 "unmap": true, 00:14:33.380 "flush": true, 00:14:33.380 "reset": true, 00:14:33.380 "nvme_admin": false, 00:14:33.380 "nvme_io": false, 00:14:33.380 "nvme_io_md": false, 00:14:33.380 "write_zeroes": true, 00:14:33.380 "zcopy": true, 00:14:33.380 "get_zone_info": false, 00:14:33.380 "zone_management": false, 00:14:33.380 "zone_append": false, 00:14:33.380 "compare": false, 00:14:33.380 "compare_and_write": false, 00:14:33.380 "abort": true, 00:14:33.380 "seek_hole": false, 00:14:33.380 "seek_data": false, 00:14:33.380 "copy": true, 00:14:33.380 "nvme_iov_md": false 00:14:33.380 }, 00:14:33.380 "memory_domains": [ 00:14:33.380 { 00:14:33.380 "dma_device_id": "system", 00:14:33.380 "dma_device_type": 1 00:14:33.380 }, 00:14:33.380 { 00:14:33.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.381 
"dma_device_type": 2 00:14:33.381 } 00:14:33.381 ], 00:14:33.381 "driver_specific": {} 00:14:33.381 } 00:14:33.381 ] 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.381 "name": "Existed_Raid", 00:14:33.381 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:33.381 "strip_size_kb": 64, 00:14:33.381 "state": "online", 00:14:33.381 "raid_level": "raid5f", 00:14:33.381 "superblock": true, 00:14:33.381 "num_base_bdevs": 3, 00:14:33.381 "num_base_bdevs_discovered": 3, 00:14:33.381 "num_base_bdevs_operational": 3, 00:14:33.381 "base_bdevs_list": [ 00:14:33.381 { 00:14:33.381 "name": "BaseBdev1", 00:14:33.381 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:33.381 "is_configured": true, 00:14:33.381 "data_offset": 2048, 00:14:33.381 "data_size": 63488 00:14:33.381 }, 00:14:33.381 { 00:14:33.381 "name": "BaseBdev2", 00:14:33.381 "uuid": "1cda0593-0de8-4de2-8cfd-4895e17a4845", 00:14:33.381 "is_configured": true, 00:14:33.381 "data_offset": 2048, 00:14:33.381 "data_size": 63488 00:14:33.381 }, 00:14:33.381 { 00:14:33.381 "name": "BaseBdev3", 00:14:33.381 "uuid": "6ba082e5-9838-4a0c-8451-af4fbec768c0", 00:14:33.381 "is_configured": true, 00:14:33.381 "data_offset": 2048, 00:14:33.381 "data_size": 63488 00:14:33.381 } 00:14:33.381 ] 00:14:33.381 }' 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.381 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.640 [2024-11-10 15:23:39.929202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.640 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:33.640 "name": "Existed_Raid", 00:14:33.640 "aliases": [ 00:14:33.640 "202a9483-06f6-46e1-9114-179bb90ad030" 00:14:33.640 ], 00:14:33.640 "product_name": "Raid Volume", 00:14:33.640 "block_size": 512, 00:14:33.640 "num_blocks": 126976, 00:14:33.640 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:33.640 "assigned_rate_limits": { 00:14:33.640 "rw_ios_per_sec": 0, 00:14:33.640 "rw_mbytes_per_sec": 0, 00:14:33.640 "r_mbytes_per_sec": 0, 00:14:33.641 "w_mbytes_per_sec": 0 00:14:33.641 }, 00:14:33.641 "claimed": false, 00:14:33.641 "zoned": false, 00:14:33.641 "supported_io_types": { 00:14:33.641 "read": true, 00:14:33.641 "write": true, 00:14:33.641 "unmap": false, 
00:14:33.641 "flush": false, 00:14:33.641 "reset": true, 00:14:33.641 "nvme_admin": false, 00:14:33.641 "nvme_io": false, 00:14:33.641 "nvme_io_md": false, 00:14:33.641 "write_zeroes": true, 00:14:33.641 "zcopy": false, 00:14:33.641 "get_zone_info": false, 00:14:33.641 "zone_management": false, 00:14:33.641 "zone_append": false, 00:14:33.641 "compare": false, 00:14:33.641 "compare_and_write": false, 00:14:33.641 "abort": false, 00:14:33.641 "seek_hole": false, 00:14:33.641 "seek_data": false, 00:14:33.641 "copy": false, 00:14:33.641 "nvme_iov_md": false 00:14:33.641 }, 00:14:33.641 "driver_specific": { 00:14:33.641 "raid": { 00:14:33.641 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:33.641 "strip_size_kb": 64, 00:14:33.641 "state": "online", 00:14:33.641 "raid_level": "raid5f", 00:14:33.641 "superblock": true, 00:14:33.641 "num_base_bdevs": 3, 00:14:33.641 "num_base_bdevs_discovered": 3, 00:14:33.641 "num_base_bdevs_operational": 3, 00:14:33.641 "base_bdevs_list": [ 00:14:33.641 { 00:14:33.641 "name": "BaseBdev1", 00:14:33.641 "uuid": "ca612afc-63dc-4d63-b65f-8803b10fc5cc", 00:14:33.641 "is_configured": true, 00:14:33.641 "data_offset": 2048, 00:14:33.641 "data_size": 63488 00:14:33.641 }, 00:14:33.641 { 00:14:33.641 "name": "BaseBdev2", 00:14:33.641 "uuid": "1cda0593-0de8-4de2-8cfd-4895e17a4845", 00:14:33.641 "is_configured": true, 00:14:33.641 "data_offset": 2048, 00:14:33.641 "data_size": 63488 00:14:33.641 }, 00:14:33.641 { 00:14:33.641 "name": "BaseBdev3", 00:14:33.641 "uuid": "6ba082e5-9838-4a0c-8451-af4fbec768c0", 00:14:33.641 "is_configured": true, 00:14:33.641 "data_offset": 2048, 00:14:33.641 "data_size": 63488 00:14:33.641 } 00:14:33.641 ] 00:14:33.641 } 00:14:33.641 } 00:14:33.641 }' 00:14:33.641 15:23:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 
-- # base_bdev_names='BaseBdev1 00:14:33.901 BaseBdev2 00:14:33.901 BaseBdev3' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 15:23:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 [2024-11-10 15:23:40.205115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:33.901 
15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.901 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.161 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.161 "name": "Existed_Raid", 00:14:34.161 "uuid": "202a9483-06f6-46e1-9114-179bb90ad030", 00:14:34.161 "strip_size_kb": 64, 00:14:34.161 "state": "online", 00:14:34.161 "raid_level": "raid5f", 00:14:34.161 "superblock": true, 00:14:34.161 "num_base_bdevs": 3, 00:14:34.161 "num_base_bdevs_discovered": 2, 00:14:34.161 "num_base_bdevs_operational": 2, 00:14:34.161 "base_bdevs_list": [ 00:14:34.161 { 00:14:34.161 "name": null, 00:14:34.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.161 "is_configured": false, 00:14:34.161 "data_offset": 0, 00:14:34.161 "data_size": 63488 00:14:34.161 }, 00:14:34.161 { 00:14:34.161 "name": "BaseBdev2", 00:14:34.161 "uuid": "1cda0593-0de8-4de2-8cfd-4895e17a4845", 00:14:34.161 "is_configured": true, 00:14:34.161 "data_offset": 2048, 00:14:34.161 "data_size": 63488 00:14:34.161 }, 00:14:34.161 { 00:14:34.161 "name": "BaseBdev3", 00:14:34.161 "uuid": "6ba082e5-9838-4a0c-8451-af4fbec768c0", 00:14:34.161 "is_configured": true, 00:14:34.161 "data_offset": 2048, 00:14:34.161 "data_size": 63488 00:14:34.161 } 00:14:34.161 ] 00:14:34.161 }' 00:14:34.161 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.161 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.420 
15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 [2024-11-10 15:23:40.722418] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:34.420 [2024-11-10 15:23:40.722662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.420 [2024-11-10 15:23:40.743312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:34.420 15:23:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.420 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.679 [2024-11-10 15:23:40.803420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.679 [2024-11-10 15:23:40.803477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.679 15:23:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.679 BaseBdev2 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:34.679 15:23:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.679 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.679 [ 00:14:34.679 { 00:14:34.679 "name": "BaseBdev2", 00:14:34.679 "aliases": [ 00:14:34.679 "504945ad-4fbb-4f03-a8e0-cebf9d065860" 00:14:34.679 ], 00:14:34.679 "product_name": "Malloc disk", 00:14:34.679 "block_size": 512, 00:14:34.679 "num_blocks": 65536, 00:14:34.680 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:34.680 "assigned_rate_limits": { 00:14:34.680 "rw_ios_per_sec": 0, 00:14:34.680 "rw_mbytes_per_sec": 0, 00:14:34.680 "r_mbytes_per_sec": 0, 00:14:34.680 "w_mbytes_per_sec": 0 00:14:34.680 }, 00:14:34.680 "claimed": false, 00:14:34.680 "zoned": false, 00:14:34.680 "supported_io_types": { 00:14:34.680 "read": true, 00:14:34.680 "write": true, 00:14:34.680 "unmap": true, 00:14:34.680 "flush": true, 00:14:34.680 "reset": true, 00:14:34.680 "nvme_admin": false, 00:14:34.680 "nvme_io": false, 00:14:34.680 "nvme_io_md": false, 00:14:34.680 "write_zeroes": true, 00:14:34.680 "zcopy": true, 00:14:34.680 "get_zone_info": false, 00:14:34.680 "zone_management": false, 00:14:34.680 "zone_append": false, 00:14:34.680 "compare": false, 00:14:34.680 "compare_and_write": false, 00:14:34.680 "abort": true, 00:14:34.680 "seek_hole": false, 00:14:34.680 "seek_data": false, 00:14:34.680 "copy": true, 00:14:34.680 "nvme_iov_md": false 00:14:34.680 }, 00:14:34.680 "memory_domains": [ 
00:14:34.680 { 00:14:34.680 "dma_device_id": "system", 00:14:34.680 "dma_device_type": 1 00:14:34.680 }, 00:14:34.680 { 00:14:34.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.680 "dma_device_type": 2 00:14:34.680 } 00:14:34.680 ], 00:14:34.680 "driver_specific": {} 00:14:34.680 } 00:14:34.680 ] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 BaseBdev3 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:34.680 15:23:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 [ 00:14:34.680 { 00:14:34.680 "name": "BaseBdev3", 00:14:34.680 "aliases": [ 00:14:34.680 "b1f5a44c-23c7-417d-ac45-5ad4704985b1" 00:14:34.680 ], 00:14:34.680 "product_name": "Malloc disk", 00:14:34.680 "block_size": 512, 00:14:34.680 "num_blocks": 65536, 00:14:34.680 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:34.680 "assigned_rate_limits": { 00:14:34.680 "rw_ios_per_sec": 0, 00:14:34.680 "rw_mbytes_per_sec": 0, 00:14:34.680 "r_mbytes_per_sec": 0, 00:14:34.680 "w_mbytes_per_sec": 0 00:14:34.680 }, 00:14:34.680 "claimed": false, 00:14:34.680 "zoned": false, 00:14:34.680 "supported_io_types": { 00:14:34.680 "read": true, 00:14:34.680 "write": true, 00:14:34.680 "unmap": true, 00:14:34.680 "flush": true, 00:14:34.680 "reset": true, 00:14:34.680 "nvme_admin": false, 00:14:34.680 "nvme_io": false, 00:14:34.680 "nvme_io_md": false, 00:14:34.680 "write_zeroes": true, 00:14:34.680 "zcopy": true, 00:14:34.680 "get_zone_info": false, 00:14:34.680 "zone_management": false, 00:14:34.680 "zone_append": false, 00:14:34.680 "compare": false, 00:14:34.680 "compare_and_write": false, 00:14:34.680 "abort": true, 00:14:34.680 "seek_hole": false, 00:14:34.680 
"seek_data": false, 00:14:34.680 "copy": true, 00:14:34.680 "nvme_iov_md": false 00:14:34.680 }, 00:14:34.680 "memory_domains": [ 00:14:34.680 { 00:14:34.680 "dma_device_id": "system", 00:14:34.680 "dma_device_type": 1 00:14:34.680 }, 00:14:34.680 { 00:14:34.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.680 "dma_device_type": 2 00:14:34.680 } 00:14:34.680 ], 00:14:34.680 "driver_specific": {} 00:14:34.680 } 00:14:34.680 ] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.680 15:23:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 [2024-11-10 15:23:41.000450] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.680 [2024-11-10 15:23:41.000573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.680 [2024-11-10 15:23:41.000616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:34.680 [2024-11-10 15:23:41.002755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.680 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.939 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.939 "name": "Existed_Raid", 00:14:34.939 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:34.939 "strip_size_kb": 64, 00:14:34.939 
"state": "configuring", 00:14:34.939 "raid_level": "raid5f", 00:14:34.939 "superblock": true, 00:14:34.939 "num_base_bdevs": 3, 00:14:34.939 "num_base_bdevs_discovered": 2, 00:14:34.939 "num_base_bdevs_operational": 3, 00:14:34.939 "base_bdevs_list": [ 00:14:34.939 { 00:14:34.939 "name": "BaseBdev1", 00:14:34.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.939 "is_configured": false, 00:14:34.939 "data_offset": 0, 00:14:34.939 "data_size": 0 00:14:34.939 }, 00:14:34.939 { 00:14:34.939 "name": "BaseBdev2", 00:14:34.939 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:34.939 "is_configured": true, 00:14:34.939 "data_offset": 2048, 00:14:34.939 "data_size": 63488 00:14:34.939 }, 00:14:34.939 { 00:14:34.939 "name": "BaseBdev3", 00:14:34.939 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:34.939 "is_configured": true, 00:14:34.939 "data_offset": 2048, 00:14:34.939 "data_size": 63488 00:14:34.939 } 00:14:34.939 ] 00:14:34.939 }' 00:14:34.939 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.939 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.198 [2024-11-10 15:23:41.424567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.198 "name": "Existed_Raid", 00:14:35.198 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:35.198 "strip_size_kb": 64, 00:14:35.198 "state": "configuring", 00:14:35.198 "raid_level": "raid5f", 00:14:35.198 "superblock": true, 00:14:35.198 "num_base_bdevs": 3, 00:14:35.198 "num_base_bdevs_discovered": 1, 
00:14:35.198 "num_base_bdevs_operational": 3, 00:14:35.198 "base_bdevs_list": [ 00:14:35.198 { 00:14:35.198 "name": "BaseBdev1", 00:14:35.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.198 "is_configured": false, 00:14:35.198 "data_offset": 0, 00:14:35.198 "data_size": 0 00:14:35.198 }, 00:14:35.198 { 00:14:35.198 "name": null, 00:14:35.198 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:35.198 "is_configured": false, 00:14:35.198 "data_offset": 0, 00:14:35.198 "data_size": 63488 00:14:35.198 }, 00:14:35.198 { 00:14:35.198 "name": "BaseBdev3", 00:14:35.198 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:35.198 "is_configured": true, 00:14:35.198 "data_offset": 2048, 00:14:35.198 "data_size": 63488 00:14:35.198 } 00:14:35.198 ] 00:14:35.198 }' 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.198 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.767 15:23:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 [2024-11-10 15:23:41.945394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.767 BaseBdev1 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 [ 00:14:35.767 { 00:14:35.767 "name": "BaseBdev1", 00:14:35.767 "aliases": [ 00:14:35.767 
"4946363e-4103-46bb-9064-7a751f3395f1" 00:14:35.767 ], 00:14:35.767 "product_name": "Malloc disk", 00:14:35.767 "block_size": 512, 00:14:35.767 "num_blocks": 65536, 00:14:35.767 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:35.767 "assigned_rate_limits": { 00:14:35.767 "rw_ios_per_sec": 0, 00:14:35.767 "rw_mbytes_per_sec": 0, 00:14:35.767 "r_mbytes_per_sec": 0, 00:14:35.767 "w_mbytes_per_sec": 0 00:14:35.767 }, 00:14:35.767 "claimed": true, 00:14:35.767 "claim_type": "exclusive_write", 00:14:35.767 "zoned": false, 00:14:35.767 "supported_io_types": { 00:14:35.767 "read": true, 00:14:35.767 "write": true, 00:14:35.767 "unmap": true, 00:14:35.767 "flush": true, 00:14:35.767 "reset": true, 00:14:35.767 "nvme_admin": false, 00:14:35.767 "nvme_io": false, 00:14:35.767 "nvme_io_md": false, 00:14:35.767 "write_zeroes": true, 00:14:35.767 "zcopy": true, 00:14:35.767 "get_zone_info": false, 00:14:35.767 "zone_management": false, 00:14:35.767 "zone_append": false, 00:14:35.767 "compare": false, 00:14:35.767 "compare_and_write": false, 00:14:35.767 "abort": true, 00:14:35.767 "seek_hole": false, 00:14:35.767 "seek_data": false, 00:14:35.767 "copy": true, 00:14:35.767 "nvme_iov_md": false 00:14:35.767 }, 00:14:35.767 "memory_domains": [ 00:14:35.767 { 00:14:35.767 "dma_device_id": "system", 00:14:35.767 "dma_device_type": 1 00:14:35.767 }, 00:14:35.767 { 00:14:35.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.767 "dma_device_type": 2 00:14:35.767 } 00:14:35.767 ], 00:14:35.767 "driver_specific": {} 00:14:35.767 } 00:14:35.767 ] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.767 15:23:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.767 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.767 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.767 "name": "Existed_Raid", 00:14:35.767 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:35.767 "strip_size_kb": 64, 00:14:35.767 "state": "configuring", 00:14:35.767 "raid_level": "raid5f", 00:14:35.767 "superblock": true, 00:14:35.767 "num_base_bdevs": 3, 00:14:35.767 
"num_base_bdevs_discovered": 2, 00:14:35.767 "num_base_bdevs_operational": 3, 00:14:35.767 "base_bdevs_list": [ 00:14:35.767 { 00:14:35.767 "name": "BaseBdev1", 00:14:35.767 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:35.767 "is_configured": true, 00:14:35.767 "data_offset": 2048, 00:14:35.767 "data_size": 63488 00:14:35.767 }, 00:14:35.767 { 00:14:35.767 "name": null, 00:14:35.767 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:35.767 "is_configured": false, 00:14:35.767 "data_offset": 0, 00:14:35.767 "data_size": 63488 00:14:35.767 }, 00:14:35.767 { 00:14:35.767 "name": "BaseBdev3", 00:14:35.767 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:35.767 "is_configured": true, 00:14:35.767 "data_offset": 2048, 00:14:35.767 "data_size": 63488 00:14:35.767 } 00:14:35.767 ] 00:14:35.767 }' 00:14:35.767 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.767 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:36.335 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.336 [2024-11-10 15:23:42.449615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.336 "name": "Existed_Raid", 00:14:36.336 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:36.336 "strip_size_kb": 64, 00:14:36.336 "state": "configuring", 00:14:36.336 "raid_level": "raid5f", 00:14:36.336 "superblock": true, 00:14:36.336 "num_base_bdevs": 3, 00:14:36.336 "num_base_bdevs_discovered": 1, 00:14:36.336 "num_base_bdevs_operational": 3, 00:14:36.336 "base_bdevs_list": [ 00:14:36.336 { 00:14:36.336 "name": "BaseBdev1", 00:14:36.336 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:36.336 "is_configured": true, 00:14:36.336 "data_offset": 2048, 00:14:36.336 "data_size": 63488 00:14:36.336 }, 00:14:36.336 { 00:14:36.336 "name": null, 00:14:36.336 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:36.336 "is_configured": false, 00:14:36.336 "data_offset": 0, 00:14:36.336 "data_size": 63488 00:14:36.336 }, 00:14:36.336 { 00:14:36.336 "name": null, 00:14:36.336 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:36.336 "is_configured": false, 00:14:36.336 "data_offset": 0, 00:14:36.336 "data_size": 63488 00:14:36.336 } 00:14:36.336 ] 00:14:36.336 }' 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.336 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.595 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.595 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.595 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.595 15:23:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:36.595 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.854 [2024-11-10 15:23:42.969787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.854 15:23:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.854 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.854 "name": "Existed_Raid", 00:14:36.854 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:36.854 "strip_size_kb": 64, 00:14:36.854 "state": "configuring", 00:14:36.854 "raid_level": "raid5f", 00:14:36.854 "superblock": true, 00:14:36.854 "num_base_bdevs": 3, 00:14:36.854 "num_base_bdevs_discovered": 2, 00:14:36.854 "num_base_bdevs_operational": 3, 00:14:36.854 "base_bdevs_list": [ 00:14:36.854 { 00:14:36.854 "name": "BaseBdev1", 00:14:36.854 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:36.854 "is_configured": true, 00:14:36.854 "data_offset": 2048, 00:14:36.854 "data_size": 63488 00:14:36.854 }, 00:14:36.854 { 00:14:36.854 "name": null, 00:14:36.854 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:36.854 "is_configured": false, 00:14:36.854 "data_offset": 0, 00:14:36.854 "data_size": 63488 00:14:36.854 }, 00:14:36.854 { 00:14:36.854 "name": "BaseBdev3", 00:14:36.854 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:36.854 "is_configured": true, 00:14:36.854 "data_offset": 2048, 00:14:36.854 "data_size": 63488 00:14:36.854 } 00:14:36.854 ] 00:14:36.854 }' 00:14:36.854 15:23:43 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.854 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.113 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:37.113 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.114 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.114 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.114 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.372 [2024-11-10 15:23:43.489943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.372 15:23:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.372 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.372 "name": "Existed_Raid", 00:14:37.372 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:37.372 "strip_size_kb": 64, 00:14:37.372 "state": "configuring", 00:14:37.372 "raid_level": "raid5f", 00:14:37.373 "superblock": true, 00:14:37.373 "num_base_bdevs": 3, 00:14:37.373 "num_base_bdevs_discovered": 1, 00:14:37.373 "num_base_bdevs_operational": 3, 00:14:37.373 "base_bdevs_list": [ 00:14:37.373 { 00:14:37.373 "name": null, 00:14:37.373 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:37.373 "is_configured": false, 00:14:37.373 "data_offset": 0, 00:14:37.373 "data_size": 63488 00:14:37.373 }, 
00:14:37.373 { 00:14:37.373 "name": null, 00:14:37.373 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:37.373 "is_configured": false, 00:14:37.373 "data_offset": 0, 00:14:37.373 "data_size": 63488 00:14:37.373 }, 00:14:37.373 { 00:14:37.373 "name": "BaseBdev3", 00:14:37.373 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:37.373 "is_configured": true, 00:14:37.373 "data_offset": 2048, 00:14:37.373 "data_size": 63488 00:14:37.373 } 00:14:37.373 ] 00:14:37.373 }' 00:14:37.373 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.373 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.631 [2024-11-10 15:23:43.969936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.631 15:23:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.890 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.890 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.890 "name": "Existed_Raid", 00:14:37.890 "uuid": 
"26357096-47bd-4496-bfd4-f057336f6878", 00:14:37.890 "strip_size_kb": 64, 00:14:37.890 "state": "configuring", 00:14:37.890 "raid_level": "raid5f", 00:14:37.890 "superblock": true, 00:14:37.890 "num_base_bdevs": 3, 00:14:37.890 "num_base_bdevs_discovered": 2, 00:14:37.890 "num_base_bdevs_operational": 3, 00:14:37.890 "base_bdevs_list": [ 00:14:37.890 { 00:14:37.890 "name": null, 00:14:37.890 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:37.890 "is_configured": false, 00:14:37.890 "data_offset": 0, 00:14:37.890 "data_size": 63488 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": "BaseBdev2", 00:14:37.890 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:37.890 "is_configured": true, 00:14:37.890 "data_offset": 2048, 00:14:37.890 "data_size": 63488 00:14:37.890 }, 00:14:37.890 { 00:14:37.890 "name": "BaseBdev3", 00:14:37.890 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:37.890 "is_configured": true, 00:14:37.890 "data_offset": 2048, 00:14:37.890 "data_size": 63488 00:14:37.890 } 00:14:37.890 ] 00:14:37.890 }' 00:14:37.890 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.890 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:38.149 15:23:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.149 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4946363e-4103-46bb-9064-7a751f3395f1 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 [2024-11-10 15:23:44.554918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:38.409 [2024-11-10 15:23:44.555261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:38.409 [2024-11-10 15:23:44.555317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:38.409 [2024-11-10 15:23:44.555663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:14:38.409 NewBaseBdev 00:14:38.409 [2024-11-10 15:23:44.556187] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:38.409 [2024-11-10 15:23:44.556214] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:38.409 [2024-11-10 15:23:44.556329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 [ 00:14:38.409 { 00:14:38.409 "name": "NewBaseBdev", 00:14:38.409 "aliases": [ 00:14:38.409 "4946363e-4103-46bb-9064-7a751f3395f1" 00:14:38.409 ], 00:14:38.409 "product_name": "Malloc disk", 00:14:38.409 "block_size": 512, 00:14:38.409 "num_blocks": 65536, 00:14:38.409 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:38.409 "assigned_rate_limits": { 00:14:38.409 "rw_ios_per_sec": 0, 00:14:38.409 "rw_mbytes_per_sec": 0, 00:14:38.409 
"r_mbytes_per_sec": 0, 00:14:38.409 "w_mbytes_per_sec": 0 00:14:38.409 }, 00:14:38.409 "claimed": true, 00:14:38.409 "claim_type": "exclusive_write", 00:14:38.409 "zoned": false, 00:14:38.409 "supported_io_types": { 00:14:38.409 "read": true, 00:14:38.409 "write": true, 00:14:38.409 "unmap": true, 00:14:38.409 "flush": true, 00:14:38.409 "reset": true, 00:14:38.409 "nvme_admin": false, 00:14:38.409 "nvme_io": false, 00:14:38.409 "nvme_io_md": false, 00:14:38.409 "write_zeroes": true, 00:14:38.409 "zcopy": true, 00:14:38.409 "get_zone_info": false, 00:14:38.409 "zone_management": false, 00:14:38.409 "zone_append": false, 00:14:38.409 "compare": false, 00:14:38.409 "compare_and_write": false, 00:14:38.409 "abort": true, 00:14:38.409 "seek_hole": false, 00:14:38.409 "seek_data": false, 00:14:38.409 "copy": true, 00:14:38.409 "nvme_iov_md": false 00:14:38.409 }, 00:14:38.409 "memory_domains": [ 00:14:38.409 { 00:14:38.409 "dma_device_id": "system", 00:14:38.409 "dma_device_type": 1 00:14:38.409 }, 00:14:38.409 { 00:14:38.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.409 "dma_device_type": 2 00:14:38.409 } 00:14:38.409 ], 00:14:38.409 "driver_specific": {} 00:14:38.409 } 00:14:38.409 ] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.409 15:23:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.409 "name": "Existed_Raid", 00:14:38.409 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:38.409 "strip_size_kb": 64, 00:14:38.409 "state": "online", 00:14:38.409 "raid_level": "raid5f", 00:14:38.409 "superblock": true, 00:14:38.409 "num_base_bdevs": 3, 00:14:38.409 "num_base_bdevs_discovered": 3, 00:14:38.409 "num_base_bdevs_operational": 3, 00:14:38.409 "base_bdevs_list": [ 00:14:38.409 { 00:14:38.409 "name": "NewBaseBdev", 00:14:38.409 "uuid": "4946363e-4103-46bb-9064-7a751f3395f1", 00:14:38.409 "is_configured": true, 00:14:38.409 "data_offset": 2048, 00:14:38.409 "data_size": 63488 00:14:38.409 }, 
00:14:38.409 { 00:14:38.409 "name": "BaseBdev2", 00:14:38.409 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:38.409 "is_configured": true, 00:14:38.409 "data_offset": 2048, 00:14:38.409 "data_size": 63488 00:14:38.409 }, 00:14:38.409 { 00:14:38.409 "name": "BaseBdev3", 00:14:38.409 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:38.409 "is_configured": true, 00:14:38.409 "data_offset": 2048, 00:14:38.409 "data_size": 63488 00:14:38.409 } 00:14:38.409 ] 00:14:38.409 }' 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.409 15:23:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:38.977 [2024-11-10 15:23:45.079411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 
00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.977 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:38.977 "name": "Existed_Raid", 00:14:38.977 "aliases": [ 00:14:38.977 "26357096-47bd-4496-bfd4-f057336f6878" 00:14:38.977 ], 00:14:38.977 "product_name": "Raid Volume", 00:14:38.977 "block_size": 512, 00:14:38.977 "num_blocks": 126976, 00:14:38.977 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:38.977 "assigned_rate_limits": { 00:14:38.977 "rw_ios_per_sec": 0, 00:14:38.977 "rw_mbytes_per_sec": 0, 00:14:38.977 "r_mbytes_per_sec": 0, 00:14:38.977 "w_mbytes_per_sec": 0 00:14:38.977 }, 00:14:38.977 "claimed": false, 00:14:38.977 "zoned": false, 00:14:38.977 "supported_io_types": { 00:14:38.977 "read": true, 00:14:38.977 "write": true, 00:14:38.977 "unmap": false, 00:14:38.977 "flush": false, 00:14:38.977 "reset": true, 00:14:38.977 "nvme_admin": false, 00:14:38.977 "nvme_io": false, 00:14:38.977 "nvme_io_md": false, 00:14:38.977 "write_zeroes": true, 00:14:38.977 "zcopy": false, 00:14:38.977 "get_zone_info": false, 00:14:38.977 "zone_management": false, 00:14:38.977 "zone_append": false, 00:14:38.977 "compare": false, 00:14:38.977 "compare_and_write": false, 00:14:38.977 "abort": false, 00:14:38.977 "seek_hole": false, 00:14:38.977 "seek_data": false, 00:14:38.977 "copy": false, 00:14:38.977 "nvme_iov_md": false 00:14:38.977 }, 00:14:38.977 "driver_specific": { 00:14:38.977 "raid": { 00:14:38.977 "uuid": "26357096-47bd-4496-bfd4-f057336f6878", 00:14:38.977 "strip_size_kb": 64, 00:14:38.977 "state": "online", 00:14:38.977 "raid_level": "raid5f", 00:14:38.977 "superblock": true, 00:14:38.977 "num_base_bdevs": 3, 00:14:38.977 "num_base_bdevs_discovered": 3, 00:14:38.977 "num_base_bdevs_operational": 3, 00:14:38.977 "base_bdevs_list": [ 00:14:38.977 { 00:14:38.978 "name": "NewBaseBdev", 00:14:38.978 "uuid": 
"4946363e-4103-46bb-9064-7a751f3395f1", 00:14:38.978 "is_configured": true, 00:14:38.978 "data_offset": 2048, 00:14:38.978 "data_size": 63488 00:14:38.978 }, 00:14:38.978 { 00:14:38.978 "name": "BaseBdev2", 00:14:38.978 "uuid": "504945ad-4fbb-4f03-a8e0-cebf9d065860", 00:14:38.978 "is_configured": true, 00:14:38.978 "data_offset": 2048, 00:14:38.978 "data_size": 63488 00:14:38.978 }, 00:14:38.978 { 00:14:38.978 "name": "BaseBdev3", 00:14:38.978 "uuid": "b1f5a44c-23c7-417d-ac45-5ad4704985b1", 00:14:38.978 "is_configured": true, 00:14:38.978 "data_offset": 2048, 00:14:38.978 "data_size": 63488 00:14:38.978 } 00:14:38.978 ] 00:14:38.978 } 00:14:38.978 } 00:14:38.978 }' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:38.978 BaseBdev2 00:14:38.978 BaseBdev3' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.978 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.237 [2024-11-10 15:23:45.363201] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:39.237 [2024-11-10 15:23:45.363237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:39.237 [2024-11-10 15:23:45.363330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:39.237 [2024-11-10 15:23:45.363648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:39.237 [2024-11-10 15:23:45.363659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 92447 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 92447 ']' 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 92447 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.237 15:23:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 92447 00:14:39.237 killing process with pid 92447 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 92447' 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 92447 00:14:39.237 [2024-11-10 15:23:45.411428] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:39.237 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 92447 00:14:39.237 [2024-11-10 15:23:45.469449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:39.497 15:23:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:39.497 00:14:39.497 real 0m9.149s 00:14:39.497 user 0m15.270s 00:14:39.497 sys 0m2.063s 00:14:39.497 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:39.497 15:23:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.497 ************************************ 00:14:39.497 END TEST raid5f_state_function_test_sb 00:14:39.497 ************************************ 00:14:39.497 15:23:45 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:39.497 15:23:45 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:14:39.497 15:23:45 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:39.497 15:23:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:39.756 ************************************ 00:14:39.756 START TEST 
raid5f_superblock_test 00:14:39.756 ************************************ 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 3 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- 
# raid_pid=93051 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 93051 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 93051 ']' 00:14:39.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:39.756 15:23:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.756 [2024-11-10 15:23:45.952327] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:14:39.756 [2024-11-10 15:23:45.952530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93051 ] 00:14:39.756 [2024-11-10 15:23:46.082602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:14:39.756 [2024-11-10 15:23:46.101039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.015 [2024-11-10 15:23:46.142316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.015 [2024-11-10 15:23:46.218450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.015 [2024-11-10 15:23:46.218485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.583 malloc1 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.583 [2024-11-10 15:23:46.797548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.583 [2024-11-10 15:23:46.797711] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.583 [2024-11-10 15:23:46.797756] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:40.583 [2024-11-10 15:23:46.797791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.583 [2024-11-10 15:23:46.800304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.583 [2024-11-10 15:23:46.800377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.583 pt1 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.583 15:23:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.583 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 malloc2 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 [2024-11-10 15:23:46.836237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:40.584 [2024-11-10 15:23:46.836368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.584 [2024-11-10 15:23:46.836406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:40.584 [2024-11-10 15:23:46.836436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.584 [2024-11-10 15:23:46.838803] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.584 [2024-11-10 15:23:46.838873] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:40.584 pt2 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 malloc3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 [2024-11-10 15:23:46.870828] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:40.584 [2024-11-10 15:23:46.870931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.584 [2024-11-10 15:23:46.870986] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:14:40.584 [2024-11-10 15:23:46.871029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.584 [2024-11-10 15:23:46.873406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.584 [2024-11-10 15:23:46.873473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:40.584 pt3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 [2024-11-10 15:23:46.882864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.584 [2024-11-10 15:23:46.885054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:40.584 [2024-11-10 15:23:46.885153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:40.584 [2024-11-10 15:23:46.885348] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:14:40.584 [2024-11-10 15:23:46.885397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:40.584 [2024-11-10 15:23:46.885679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:40.584 [2024-11-10 15:23:46.886178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:14:40.584 [2024-11-10 15:23:46.886225] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:14:40.584 [2024-11-10 15:23:46.886376] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.584 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.584 "name": "raid_bdev1", 00:14:40.584 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:40.584 "strip_size_kb": 64, 00:14:40.584 "state": "online", 00:14:40.584 "raid_level": "raid5f", 00:14:40.584 "superblock": true, 00:14:40.584 "num_base_bdevs": 3, 00:14:40.584 "num_base_bdevs_discovered": 3, 00:14:40.584 "num_base_bdevs_operational": 3, 00:14:40.584 "base_bdevs_list": [ 00:14:40.584 { 00:14:40.584 "name": "pt1", 00:14:40.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.584 "is_configured": true, 00:14:40.584 "data_offset": 2048, 00:14:40.584 "data_size": 63488 00:14:40.584 }, 00:14:40.584 { 00:14:40.584 "name": "pt2", 00:14:40.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.584 "is_configured": true, 00:14:40.584 "data_offset": 2048, 00:14:40.584 "data_size": 63488 00:14:40.584 }, 00:14:40.584 { 00:14:40.584 "name": "pt3", 00:14:40.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.584 "is_configured": true, 00:14:40.584 "data_offset": 2048, 00:14:40.584 "data_size": 63488 00:14:40.584 } 00:14:40.584 ] 00:14:40.584 }' 00:14:40.898 15:23:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.898 15:23:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:41.158 
15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:41.158 [2024-11-10 15:23:47.272982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:41.158 "name": "raid_bdev1", 00:14:41.158 "aliases": [ 00:14:41.158 "fdcb2f4d-5f88-4970-8371-368b4f1c3f72" 00:14:41.158 ], 00:14:41.158 "product_name": "Raid Volume", 00:14:41.158 "block_size": 512, 00:14:41.158 "num_blocks": 126976, 00:14:41.158 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:41.158 "assigned_rate_limits": { 00:14:41.158 "rw_ios_per_sec": 0, 00:14:41.158 "rw_mbytes_per_sec": 0, 00:14:41.158 "r_mbytes_per_sec": 0, 00:14:41.158 "w_mbytes_per_sec": 0 00:14:41.158 }, 00:14:41.158 "claimed": false, 00:14:41.158 "zoned": false, 00:14:41.158 "supported_io_types": { 00:14:41.158 "read": true, 00:14:41.158 "write": true, 00:14:41.158 "unmap": false, 00:14:41.158 "flush": false, 00:14:41.158 "reset": true, 00:14:41.158 "nvme_admin": false, 00:14:41.158 "nvme_io": false, 00:14:41.158 "nvme_io_md": false, 00:14:41.158 "write_zeroes": true, 00:14:41.158 "zcopy": false, 00:14:41.158 "get_zone_info": false, 00:14:41.158 "zone_management": false, 00:14:41.158 "zone_append": false, 00:14:41.158 "compare": false, 00:14:41.158 "compare_and_write": false, 00:14:41.158 "abort": false, 00:14:41.158 "seek_hole": 
false, 00:14:41.158 "seek_data": false, 00:14:41.158 "copy": false, 00:14:41.158 "nvme_iov_md": false 00:14:41.158 }, 00:14:41.158 "driver_specific": { 00:14:41.158 "raid": { 00:14:41.158 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:41.158 "strip_size_kb": 64, 00:14:41.158 "state": "online", 00:14:41.158 "raid_level": "raid5f", 00:14:41.158 "superblock": true, 00:14:41.158 "num_base_bdevs": 3, 00:14:41.158 "num_base_bdevs_discovered": 3, 00:14:41.158 "num_base_bdevs_operational": 3, 00:14:41.158 "base_bdevs_list": [ 00:14:41.158 { 00:14:41.158 "name": "pt1", 00:14:41.158 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.158 "is_configured": true, 00:14:41.158 "data_offset": 2048, 00:14:41.158 "data_size": 63488 00:14:41.158 }, 00:14:41.158 { 00:14:41.158 "name": "pt2", 00:14:41.158 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.158 "is_configured": true, 00:14:41.158 "data_offset": 2048, 00:14:41.158 "data_size": 63488 00:14:41.158 }, 00:14:41.158 { 00:14:41.158 "name": "pt3", 00:14:41.158 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.158 "is_configured": true, 00:14:41.158 "data_offset": 2048, 00:14:41.158 "data_size": 63488 00:14:41.158 } 00:14:41.158 ] 00:14:41.158 } 00:14:41.158 } 00:14:41.158 }' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:41.158 pt2 00:14:41.158 pt3' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.158 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:41.419 15:23:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:41.419 [2024-11-10 15:23:47.581019] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fdcb2f4d-5f88-4970-8371-368b4f1c3f72 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fdcb2f4d-5f88-4970-8371-368b4f1c3f72 ']' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 [2024-11-10 15:23:47.624811] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 
00:14:41.419 [2024-11-10 15:23:47.624846] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.419 [2024-11-10 15:23:47.624931] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.419 [2024-11-10 15:23:47.625012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.419 [2024-11-10 15:23:47.625034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 
00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.419 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 [2024-11-10 15:23:47.780908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:41.680 [2024-11-10 15:23:47.783115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:41.680 [2024-11-10 15:23:47.783167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:41.680 [2024-11-10 15:23:47.783217] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:41.680 [2024-11-10 15:23:47.783273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:41.680 [2024-11-10 15:23:47.783293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:41.680 [2024-11-10 15:23:47.783308] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.680 [2024-11-10 15:23:47.783317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:14:41.680 request: 00:14:41.680 { 00:14:41.680 "name": "raid_bdev1", 00:14:41.680 "raid_level": "raid5f", 00:14:41.680 "base_bdevs": [ 00:14:41.680 "malloc1", 00:14:41.680 "malloc2", 00:14:41.680 "malloc3" 00:14:41.680 ], 00:14:41.680 "strip_size_kb": 64, 00:14:41.680 "superblock": false, 00:14:41.680 "method": "bdev_raid_create", 00:14:41.680 "req_id": 1 00:14:41.680 } 00:14:41.680 Got JSON-RPC error response 00:14:41.680 response: 00:14:41.680 { 00:14:41.680 "code": -17, 00:14:41.680 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:41.680 } 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:41.680 15:23:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 [2024-11-10 15:23:47.848870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:41.680 [2024-11-10 15:23:47.848973] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.680 [2024-11-10 15:23:47.849009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:41.680 [2024-11-10 15:23:47.849050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.680 [2024-11-10 15:23:47.851527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.680 [2024-11-10 15:23:47.851597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:41.680 [2024-11-10 15:23:47.851690] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:41.680 [2024-11-10 15:23:47.851743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:41.680 pt1 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.680 "name": "raid_bdev1", 00:14:41.680 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:41.680 "strip_size_kb": 64, 00:14:41.680 "state": "configuring", 00:14:41.680 "raid_level": "raid5f", 00:14:41.680 "superblock": true, 00:14:41.680 "num_base_bdevs": 3, 00:14:41.680 "num_base_bdevs_discovered": 1, 00:14:41.680 "num_base_bdevs_operational": 3, 00:14:41.680 "base_bdevs_list": [ 00:14:41.680 { 00:14:41.680 "name": "pt1", 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.680 "is_configured": true, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 }, 00:14:41.680 { 
00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 }, 00:14:41.680 { 00:14:41.680 "name": null, 00:14:41.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.680 "is_configured": false, 00:14:41.680 "data_offset": 2048, 00:14:41.680 "data_size": 63488 00:14:41.680 } 00:14:41.680 ] 00:14:41.680 }' 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.680 15:23:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.941 [2024-11-10 15:23:48.285042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.941 [2024-11-10 15:23:48.285117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.941 [2024-11-10 15:23:48.285149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:41.941 [2024-11-10 15:23:48.285158] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.941 [2024-11-10 15:23:48.285608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.941 [2024-11-10 15:23:48.285637] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.941 [2024-11-10 15:23:48.285725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:14:41.941 [2024-11-10 15:23:48.285750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.941 pt2 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.941 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.941 [2024-11-10 15:23:48.297092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.201 "name": "raid_bdev1", 00:14:42.201 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:42.201 "strip_size_kb": 64, 00:14:42.201 "state": "configuring", 00:14:42.201 "raid_level": "raid5f", 00:14:42.201 "superblock": true, 00:14:42.201 "num_base_bdevs": 3, 00:14:42.201 "num_base_bdevs_discovered": 1, 00:14:42.201 "num_base_bdevs_operational": 3, 00:14:42.201 "base_bdevs_list": [ 00:14:42.201 { 00:14:42.201 "name": "pt1", 00:14:42.201 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.201 "is_configured": true, 00:14:42.201 "data_offset": 2048, 00:14:42.201 "data_size": 63488 00:14:42.201 }, 00:14:42.201 { 00:14:42.201 "name": null, 00:14:42.201 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.201 "is_configured": false, 00:14:42.201 "data_offset": 0, 00:14:42.201 "data_size": 63488 00:14:42.201 }, 00:14:42.201 { 00:14:42.201 "name": null, 00:14:42.201 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.201 "is_configured": false, 00:14:42.201 "data_offset": 2048, 00:14:42.201 "data_size": 63488 00:14:42.201 } 00:14:42.201 ] 00:14:42.201 }' 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.201 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:42.461 15:23:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.461 [2024-11-10 15:23:48.765217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:42.461 [2024-11-10 15:23:48.765390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.461 [2024-11-10 15:23:48.765426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:42.461 [2024-11-10 15:23:48.765473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.461 [2024-11-10 15:23:48.765973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.461 [2024-11-10 15:23:48.766051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:42.461 [2024-11-10 15:23:48.766178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:42.461 [2024-11-10 15:23:48.766239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:42.461 pt2 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.461 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.462 [2024-11-10 15:23:48.777151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:42.462 [2024-11-10 15:23:48.777260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:42.462 [2024-11-10 15:23:48.777289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:42.462 [2024-11-10 15:23:48.777328] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:42.462 [2024-11-10 15:23:48.777735] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:42.462 [2024-11-10 15:23:48.777791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:42.462 [2024-11-10 15:23:48.777876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:42.462 [2024-11-10 15:23:48.777924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:42.462 [2024-11-10 15:23:48.778066] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:42.462 [2024-11-10 15:23:48.778110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:42.462 [2024-11-10 15:23:48.778374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:42.462 [2024-11-10 15:23:48.778831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:42.462 [2024-11-10 15:23:48.778877] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:42.462 [2024-11-10 15:23:48.779040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.462 pt3 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.462 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.722 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:42.722 "name": "raid_bdev1", 00:14:42.722 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:42.722 "strip_size_kb": 64, 00:14:42.722 "state": "online", 00:14:42.722 "raid_level": "raid5f", 00:14:42.722 "superblock": true, 00:14:42.722 "num_base_bdevs": 3, 00:14:42.722 "num_base_bdevs_discovered": 3, 00:14:42.722 "num_base_bdevs_operational": 3, 00:14:42.722 "base_bdevs_list": [ 00:14:42.722 { 00:14:42.722 "name": "pt1", 00:14:42.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.722 "is_configured": true, 00:14:42.722 "data_offset": 2048, 00:14:42.722 "data_size": 63488 00:14:42.722 }, 00:14:42.722 { 00:14:42.722 "name": "pt2", 00:14:42.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.722 "is_configured": true, 00:14:42.722 "data_offset": 2048, 00:14:42.722 "data_size": 63488 00:14:42.722 }, 00:14:42.722 { 00:14:42.722 "name": "pt3", 00:14:42.722 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.722 "is_configured": true, 00:14:42.722 "data_offset": 2048, 00:14:42.722 "data_size": 63488 00:14:42.722 } 00:14:42.722 ] 00:14:42.722 }' 00:14:42.722 15:23:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.722 15:23:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.982 15:23:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.982 [2024-11-10 15:23:49.237539] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.982 "name": "raid_bdev1", 00:14:42.982 "aliases": [ 00:14:42.982 "fdcb2f4d-5f88-4970-8371-368b4f1c3f72" 00:14:42.982 ], 00:14:42.982 "product_name": "Raid Volume", 00:14:42.982 "block_size": 512, 00:14:42.982 "num_blocks": 126976, 00:14:42.982 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:42.982 "assigned_rate_limits": { 00:14:42.982 "rw_ios_per_sec": 0, 00:14:42.982 "rw_mbytes_per_sec": 0, 00:14:42.982 "r_mbytes_per_sec": 0, 00:14:42.982 "w_mbytes_per_sec": 0 00:14:42.982 }, 00:14:42.982 "claimed": false, 00:14:42.982 "zoned": false, 00:14:42.982 "supported_io_types": { 00:14:42.982 "read": true, 00:14:42.982 "write": true, 00:14:42.982 "unmap": false, 00:14:42.982 "flush": false, 00:14:42.982 "reset": true, 00:14:42.982 "nvme_admin": false, 00:14:42.982 "nvme_io": false, 00:14:42.982 "nvme_io_md": false, 00:14:42.982 "write_zeroes": true, 00:14:42.982 "zcopy": false, 00:14:42.982 "get_zone_info": false, 00:14:42.982 "zone_management": false, 00:14:42.982 "zone_append": false, 00:14:42.982 "compare": false, 00:14:42.982 "compare_and_write": false, 00:14:42.982 "abort": false, 00:14:42.982 "seek_hole": false, 00:14:42.982 "seek_data": false, 00:14:42.982 "copy": false, 00:14:42.982 "nvme_iov_md": false 00:14:42.982 }, 00:14:42.982 
"driver_specific": { 00:14:42.982 "raid": { 00:14:42.982 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:42.982 "strip_size_kb": 64, 00:14:42.982 "state": "online", 00:14:42.982 "raid_level": "raid5f", 00:14:42.982 "superblock": true, 00:14:42.982 "num_base_bdevs": 3, 00:14:42.982 "num_base_bdevs_discovered": 3, 00:14:42.982 "num_base_bdevs_operational": 3, 00:14:42.982 "base_bdevs_list": [ 00:14:42.982 { 00:14:42.982 "name": "pt1", 00:14:42.982 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.982 "is_configured": true, 00:14:42.982 "data_offset": 2048, 00:14:42.982 "data_size": 63488 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "name": "pt2", 00:14:42.982 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.982 "is_configured": true, 00:14:42.982 "data_offset": 2048, 00:14:42.982 "data_size": 63488 00:14:42.982 }, 00:14:42.982 { 00:14:42.982 "name": "pt3", 00:14:42.982 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.982 "is_configured": true, 00:14:42.982 "data_offset": 2048, 00:14:42.982 "data_size": 63488 00:14:42.982 } 00:14:42.982 ] 00:14:42.982 } 00:14:42.982 } 00:14:42.982 }' 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:42.982 pt2 00:14:42.982 pt3' 00:14:42.982 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 [2024-11-10 15:23:49.493593] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fdcb2f4d-5f88-4970-8371-368b4f1c3f72 '!=' fdcb2f4d-5f88-4970-8371-368b4f1c3f72 ']' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 [2024-11-10 15:23:49.541467] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.243 "name": "raid_bdev1", 
00:14:43.243 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:43.243 "strip_size_kb": 64, 00:14:43.243 "state": "online", 00:14:43.243 "raid_level": "raid5f", 00:14:43.243 "superblock": true, 00:14:43.243 "num_base_bdevs": 3, 00:14:43.243 "num_base_bdevs_discovered": 2, 00:14:43.243 "num_base_bdevs_operational": 2, 00:14:43.243 "base_bdevs_list": [ 00:14:43.243 { 00:14:43.243 "name": null, 00:14:43.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.243 "is_configured": false, 00:14:43.243 "data_offset": 0, 00:14:43.243 "data_size": 63488 00:14:43.243 }, 00:14:43.243 { 00:14:43.243 "name": "pt2", 00:14:43.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.243 "is_configured": true, 00:14:43.243 "data_offset": 2048, 00:14:43.243 "data_size": 63488 00:14:43.243 }, 00:14:43.243 { 00:14:43.243 "name": "pt3", 00:14:43.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.243 "is_configured": true, 00:14:43.243 "data_offset": 2048, 00:14:43.243 "data_size": 63488 00:14:43.243 } 00:14:43.243 ] 00:14:43.243 }' 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.243 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 [2024-11-10 15:23:49.921510] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.813 [2024-11-10 15:23:49.921621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.813 [2024-11-10 15:23:49.921751] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.813 [2024-11-10 15:23:49.921846] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.813 [2024-11-10 15:23:49.921945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.813 15:23:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.813 [2024-11-10 15:23:50.009508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.813 [2024-11-10 15:23:50.009619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.813 [2024-11-10 15:23:50.009643] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:43.813 [2024-11-10 15:23:50.009655] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.813 [2024-11-10 15:23:50.012186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.813 [2024-11-10 15:23:50.012225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.813 [2024-11-10 15:23:50.012298] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:14:43.813 [2024-11-10 15:23:50.012342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.813 pt2 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.813 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.814 15:23:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.814 "name": "raid_bdev1", 00:14:43.814 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:43.814 "strip_size_kb": 64, 00:14:43.814 "state": "configuring", 00:14:43.814 "raid_level": "raid5f", 00:14:43.814 "superblock": true, 00:14:43.814 "num_base_bdevs": 3, 00:14:43.814 "num_base_bdevs_discovered": 1, 00:14:43.814 "num_base_bdevs_operational": 2, 00:14:43.814 "base_bdevs_list": [ 00:14:43.814 { 00:14:43.814 "name": null, 00:14:43.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.814 "is_configured": false, 00:14:43.814 "data_offset": 2048, 00:14:43.814 "data_size": 63488 00:14:43.814 }, 00:14:43.814 { 00:14:43.814 "name": "pt2", 00:14:43.814 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.814 "is_configured": true, 00:14:43.814 "data_offset": 2048, 00:14:43.814 "data_size": 63488 00:14:43.814 }, 00:14:43.814 { 00:14:43.814 "name": null, 00:14:43.814 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.814 "is_configured": false, 00:14:43.814 "data_offset": 2048, 00:14:43.814 "data_size": 63488 00:14:43.814 } 00:14:43.814 ] 00:14:43.814 }' 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.814 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.384 15:23:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.384 [2024-11-10 15:23:50.517686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.384 [2024-11-10 15:23:50.517838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.384 [2024-11-10 15:23:50.517876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:44.384 [2024-11-10 15:23:50.517910] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.384 [2024-11-10 15:23:50.518426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.384 [2024-11-10 15:23:50.518497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.384 [2024-11-10 15:23:50.518625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:44.384 [2024-11-10 15:23:50.518693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.384 [2024-11-10 15:23:50.518830] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:44.384 [2024-11-10 15:23:50.518870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.384 [2024-11-10 15:23:50.519158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:44.384 [2024-11-10 15:23:50.519716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:44.384 [2024-11-10 15:23:50.519766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:44.384 [2024-11-10 15:23:50.520081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.384 pt3 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.384 15:23:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.384 "name": "raid_bdev1", 00:14:44.384 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:44.384 "strip_size_kb": 64, 00:14:44.384 "state": "online", 00:14:44.384 "raid_level": "raid5f", 00:14:44.384 "superblock": true, 
00:14:44.384 "num_base_bdevs": 3, 00:14:44.384 "num_base_bdevs_discovered": 2, 00:14:44.384 "num_base_bdevs_operational": 2, 00:14:44.384 "base_bdevs_list": [ 00:14:44.384 { 00:14:44.384 "name": null, 00:14:44.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.384 "is_configured": false, 00:14:44.384 "data_offset": 2048, 00:14:44.384 "data_size": 63488 00:14:44.384 }, 00:14:44.384 { 00:14:44.384 "name": "pt2", 00:14:44.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.384 "is_configured": true, 00:14:44.384 "data_offset": 2048, 00:14:44.384 "data_size": 63488 00:14:44.384 }, 00:14:44.384 { 00:14:44.384 "name": "pt3", 00:14:44.384 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.384 "is_configured": true, 00:14:44.384 "data_offset": 2048, 00:14:44.384 "data_size": 63488 00:14:44.384 } 00:14:44.384 ] 00:14:44.384 }' 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.384 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.644 [2024-11-10 15:23:50.978375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.644 [2024-11-10 15:23:50.978476] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:44.644 [2024-11-10 15:23:50.978603] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:44.644 [2024-11-10 15:23:50.978706] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.644 [2024-11-10 15:23:50.978786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.644 15:23:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.904 [2024-11-10 15:23:51.054345] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on malloc1 00:14:44.904 [2024-11-10 15:23:51.054471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.904 [2024-11-10 15:23:51.054507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:44.904 [2024-11-10 15:23:51.054553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.904 [2024-11-10 15:23:51.057210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.904 [2024-11-10 15:23:51.057299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:44.904 [2024-11-10 15:23:51.057430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:44.904 [2024-11-10 15:23:51.057497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:44.904 [2024-11-10 15:23:51.057640] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:44.904 [2024-11-10 15:23:51.057693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:44.904 [2024-11-10 15:23:51.057743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:14:44.904 [2024-11-10 15:23:51.057829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:44.904 pt1 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.904 "name": "raid_bdev1", 00:14:44.904 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72", 00:14:44.904 "strip_size_kb": 64, 00:14:44.904 "state": "configuring", 00:14:44.904 "raid_level": "raid5f", 00:14:44.904 "superblock": true, 00:14:44.904 "num_base_bdevs": 3, 00:14:44.904 "num_base_bdevs_discovered": 1, 00:14:44.904 "num_base_bdevs_operational": 2, 00:14:44.904 "base_bdevs_list": [ 00:14:44.904 { 00:14:44.904 "name": null, 00:14:44.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.904 "is_configured": false, 
00:14:44.904 "data_offset": 2048,
00:14:44.904 "data_size": 63488
00:14:44.904 },
00:14:44.904 {
00:14:44.904 "name": "pt2",
00:14:44.904 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:44.904 "is_configured": true,
00:14:44.904 "data_offset": 2048,
00:14:44.904 "data_size": 63488
00:14:44.904 },
00:14:44.904 {
00:14:44.904 "name": null,
00:14:44.904 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:44.904 "is_configured": false,
00:14:44.904 "data_offset": 2048,
00:14:44.904 "data_size": 63488
00:14:44.904 }
00:14:44.904 ]
00:14:44.904 }'
00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:44.904 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]]
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.475 [2024-11-10 15:23:51.582540] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:14:45.475 [2024-11-10 15:23:51.582699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:45.475 [2024-11-10 15:23:51.582743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480
00:14:45.475 [2024-11-10 15:23:51.582772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:45.475 [2024-11-10 15:23:51.583312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:45.475 [2024-11-10 15:23:51.583392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:14:45.475 [2024-11-10 15:23:51.583518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:14:45.475 [2024-11-10 15:23:51.583575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:14:45.475 [2024-11-10 15:23:51.583718] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580
00:14:45.475 [2024-11-10 15:23:51.583756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:14:45.475 [2024-11-10 15:23:51.584092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490
00:14:45.475 [2024-11-10 15:23:51.584718] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580
00:14:45.475 [2024-11-10 15:23:51.584778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580
00:14:45.475 [2024-11-10 15:23:51.585055] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:45.475 pt3
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:45.475 "name": "raid_bdev1",
00:14:45.475 "uuid": "fdcb2f4d-5f88-4970-8371-368b4f1c3f72",
00:14:45.475 "strip_size_kb": 64,
00:14:45.475 "state": "online",
00:14:45.475 "raid_level": "raid5f",
00:14:45.475 "superblock": true,
00:14:45.475 "num_base_bdevs": 3,
00:14:45.475 "num_base_bdevs_discovered": 2,
00:14:45.475 "num_base_bdevs_operational": 2,
00:14:45.475 "base_bdevs_list": [
00:14:45.475 {
00:14:45.475 "name": null,
00:14:45.475 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:45.475 "is_configured": false,
00:14:45.475 "data_offset": 2048,
00:14:45.475 "data_size": 63488
00:14:45.475 },
00:14:45.475 {
00:14:45.475 "name": "pt2",
00:14:45.475 "uuid": "00000000-0000-0000-0000-000000000002",
00:14:45.475 "is_configured": true,
00:14:45.475 "data_offset": 2048,
00:14:45.475 "data_size": 63488
00:14:45.475 },
00:14:45.475 {
00:14:45.475 "name": "pt3",
00:14:45.475 "uuid": "00000000-0000-0000-0000-000000000003",
00:14:45.475 "is_configured": true,
00:14:45.475 "data_offset": 2048,
00:14:45.475 "data_size": 63488
00:14:45.475 }
00:14:45.475 ]
00:14:45.475 }'
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:45.475 15:23:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured'
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]]
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:45.735 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid'
[2024-11-10 15:23:52.079540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' fdcb2f4d-5f88-4970-8371-368b4f1c3f72 '!=' fdcb2f4d-5f88-4970-8371-368b4f1c3f72 ']'
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 93051
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 93051 ']'
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 93051
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93051
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:14:45.996 killing process with pid 93051
15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93051'
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 93051
00:14:45.996 [2024-11-10 15:23:52.152589] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:14:45.996 [2024-11-10 15:23:52.152726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:45.996 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 93051
00:14:45.996 [2024-11-10 15:23:52.152797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:45.996 [2024-11-10 15:23:52.152811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline
00:14:45.996 [2024-11-10 15:23:52.214354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:14:46.256 15:23:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:14:46.256
00:14:46.256 real 0m6.636s
00:14:46.256 user 0m10.984s
00:14:46.256 sys 0m1.472s
00:14:46.256 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:46.256 15:23:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.256 ************************************
00:14:46.256 END TEST raid5f_superblock_test
00:14:46.256 ************************************
00:14:46.256 15:23:52 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']'
00:14:46.256 15:23:52 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true
00:14:46.256 15:23:52 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']'
00:14:46.256 15:23:52 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:46.256 15:23:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:14:46.256 ************************************
00:14:46.256 START TEST raid5f_rebuild_test
00:14:46.256 ************************************
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 false false true
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs ))
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']'
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']'
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64'
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']'
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=93479
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 93479
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid
00:14:46.256 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 93479 ']'
00:14:46.257 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:46.257 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:46.257 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:46.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:46.257 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable
00:14:46.257 15:23:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:46.517 [2024-11-10 15:23:52.677075] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization...
00:14:46.517 [2024-11-10 15:23:52.677285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536).
00:14:46.517 Zero copy mechanism will not be used.
00:14:46.517 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93479 ]
00:14:46.517 [2024-11-10 15:23:52.810328] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:46.517 [2024-11-10 15:23:52.849703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:46.517 [2024-11-10 15:23:52.874425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:46.777 [2024-11-10 15:23:52.917809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:46.777 [2024-11-10 15:23:52.917933] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.347 BaseBdev1_malloc
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.347 [2024-11-10 15:23:53.509722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:47.347 [2024-11-10 15:23:53.509801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.347 [2024-11-10 15:23:53.509826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:14:47.347 [2024-11-10 15:23:53.509843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.347 [2024-11-10 15:23:53.511996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.347 [2024-11-10 15:23:53.512053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:47.347 BaseBdev1
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.347 BaseBdev2_malloc
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.347 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.347 [2024-11-10 15:23:53.538428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
[2024-11-10 15:23:53.538486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.348 [2024-11-10 15:23:53.538504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:14:47.348 [2024-11-10 15:23:53.538516] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.348 [2024-11-10 15:23:53.540531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.348 [2024-11-10 15:23:53.540577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:47.348 BaseBdev2
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 BaseBdev3_malloc
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 [2024-11-10 15:23:53.567162] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:14:47.348 [2024-11-10 15:23:53.567218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.348 [2024-11-10 15:23:53.567237] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
[2024-11-10 15:23:53.567250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.348 [2024-11-10 15:23:53.569240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.348 [2024-11-10 15:23:53.569348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:14:47.348 BaseBdev3
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 spare_malloc
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 spare_delay
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 [2024-11-10 15:23:53.625868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:47.348 [2024-11-10 15:23:53.625933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:47.348 [2024-11-10 15:23:53.625956] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:14:47.348 [2024-11-10 15:23:53.625973] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:47.348 [2024-11-10 15:23:53.628384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:47.348 [2024-11-10 15:23:53.628435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:14:47.348 spare
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 [2024-11-10 15:23:53.637916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:47.348 [2024-11-10 15:23:53.639747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:47.348 [2024-11-10 15:23:53.639812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:14:47.348 [2024-11-10 15:23:53.639902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:14:47.348 [2024-11-10 15:23:53.639918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:14:47.348 [2024-11-10 15:23:53.640221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:14:47.348 [2024-11-10 15:23:53.640655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:14:47.348 [2024-11-10 15:23:53.640671] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:14:47.348 [2024-11-10 15:23:53.640804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:47.348 "name": "raid_bdev1",
00:14:47.348 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f",
00:14:47.348 "strip_size_kb": 64,
00:14:47.348 "state": "online",
00:14:47.348 "raid_level": "raid5f",
00:14:47.348 "superblock": false,
00:14:47.348 "num_base_bdevs": 3,
00:14:47.348 "num_base_bdevs_discovered": 3,
00:14:47.348 "num_base_bdevs_operational": 3,
00:14:47.348 "base_bdevs_list": [
00:14:47.348 {
00:14:47.348 "name": "BaseBdev1",
00:14:47.348 "uuid": "a8c2e98a-0434-5f1e-9b02-0ee9994d1e5a",
00:14:47.348 "is_configured": true,
00:14:47.348 "data_offset": 0,
00:14:47.348 "data_size": 65536
00:14:47.348 },
00:14:47.348 {
00:14:47.348 "name": "BaseBdev2",
00:14:47.348 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8",
00:14:47.348 "is_configured": true,
00:14:47.348 "data_offset": 0,
00:14:47.348 "data_size": 65536
00:14:47.348 },
00:14:47.348 {
00:14:47.348 "name": "BaseBdev3",
00:14:47.348 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75",
00:14:47.348 "is_configured": true,
00:14:47.348 "data_offset": 0,
00:14:47.348 "data_size": 65536
00:14:47.348 }
00:14:47.348 ]
00:14:47.348 }'
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:14:47.348 15:23:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.918 [2024-11-10 15:23:54.122468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072
00:14:47.918 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:47.919 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:14:48.179 [2024-11-10 15:23:54.374428] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490
00:14:48.179 /dev/nbd0
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:48.179 1+0 records in
00:14:48.179 1+0 records out
00:14:48.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283947 s, 14.4 MB/s
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128
00:14:48.179 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct
00:14:48.439 512+0 records in
00:14:48.439 512+0 records out
00:14:48.439 67108864 bytes (67 MB, 64 MiB) copied, 0.288 s, 233 MB/s
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:48.439 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:48.699 [2024-11-10 15:23:54.929211] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.699 [2024-11-10 15:23:54.942789] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:14:48.699 "name": "raid_bdev1",
00:14:48.699 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f",
00:14:48.699 "strip_size_kb": 64,
00:14:48.699 "state": "online",
00:14:48.699 "raid_level": "raid5f",
00:14:48.699 "superblock": false,
00:14:48.699 "num_base_bdevs": 3,
00:14:48.699 "num_base_bdevs_discovered": 2,
00:14:48.699 "num_base_bdevs_operational": 2,
00:14:48.699 "base_bdevs_list": [
00:14:48.699 {
00:14:48.699 "name": null,
00:14:48.699 "uuid": "00000000-0000-0000-0000-000000000000",
00:14:48.699 "is_configured": false,
00:14:48.699 "data_offset": 0,
00:14:48.699 "data_size": 65536
00:14:48.699 },
00:14:48.699 {
00:14:48.699 "name": "BaseBdev2",
00:14:48.699 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8",
00:14:48.699 "is_configured": true,
00:14:48.699 "data_offset": 0,
00:14:48.699 "data_size": 65536
00:14:48.699 },
00:14:48.699 {
00:14:48.699 "name": "BaseBdev3",
00:14:48.699 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75",
00:14:48.699 "is_configured": true, 00:14:48.699 "data_offset": 0, 00:14:48.699 "data_size": 65536 00:14:48.699 } 00:14:48.699 ] 00:14:48.699 }' 00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.699 15:23:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.269 15:23:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.269 15:23:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.269 15:23:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.269 [2024-11-10 15:23:55.354910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.269 [2024-11-10 15:23:55.359769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ba90 00:14:49.269 15:23:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.269 15:23:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:49.269 [2024-11-10 15:23:55.361935] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.210 "name": "raid_bdev1", 00:14:50.210 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:50.210 "strip_size_kb": 64, 00:14:50.210 "state": "online", 00:14:50.210 "raid_level": "raid5f", 00:14:50.210 "superblock": false, 00:14:50.210 "num_base_bdevs": 3, 00:14:50.210 "num_base_bdevs_discovered": 3, 00:14:50.210 "num_base_bdevs_operational": 3, 00:14:50.210 "process": { 00:14:50.210 "type": "rebuild", 00:14:50.210 "target": "spare", 00:14:50.210 "progress": { 00:14:50.210 "blocks": 20480, 00:14:50.210 "percent": 15 00:14:50.210 } 00:14:50.210 }, 00:14:50.210 "base_bdevs_list": [ 00:14:50.210 { 00:14:50.210 "name": "spare", 00:14:50.210 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:50.210 "is_configured": true, 00:14:50.210 "data_offset": 0, 00:14:50.210 "data_size": 65536 00:14:50.210 }, 00:14:50.210 { 00:14:50.210 "name": "BaseBdev2", 00:14:50.210 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:50.210 "is_configured": true, 00:14:50.210 "data_offset": 0, 00:14:50.210 "data_size": 65536 00:14:50.210 }, 00:14:50.210 { 00:14:50.210 "name": "BaseBdev3", 00:14:50.210 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:50.210 "is_configured": true, 00:14:50.210 "data_offset": 0, 00:14:50.210 "data_size": 65536 00:14:50.210 } 00:14:50.210 ] 00:14:50.210 }' 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.210 
15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.210 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.210 [2024-11-10 15:23:56.520210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.471 [2024-11-10 15:23:56.571032] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.471 [2024-11-10 15:23:56.571088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.471 [2024-11-10 15:23:56.571109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.471 [2024-11-10 15:23:56.571120] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.471 15:23:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.471 "name": "raid_bdev1", 00:14:50.471 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:50.471 "strip_size_kb": 64, 00:14:50.471 "state": "online", 00:14:50.471 "raid_level": "raid5f", 00:14:50.471 "superblock": false, 00:14:50.471 "num_base_bdevs": 3, 00:14:50.471 "num_base_bdevs_discovered": 2, 00:14:50.471 "num_base_bdevs_operational": 2, 00:14:50.471 "base_bdevs_list": [ 00:14:50.471 { 00:14:50.471 "name": null, 00:14:50.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.471 "is_configured": false, 00:14:50.471 "data_offset": 0, 00:14:50.471 "data_size": 65536 00:14:50.471 }, 00:14:50.471 { 00:14:50.471 "name": "BaseBdev2", 00:14:50.471 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:50.471 "is_configured": true, 00:14:50.471 "data_offset": 0, 00:14:50.471 "data_size": 65536 00:14:50.471 }, 00:14:50.471 { 00:14:50.471 "name": "BaseBdev3", 00:14:50.471 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:50.471 
"is_configured": true, 00:14:50.471 "data_offset": 0, 00:14:50.471 "data_size": 65536 00:14:50.471 } 00:14:50.471 ] 00:14:50.471 }' 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.471 15:23:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.732 "name": "raid_bdev1", 00:14:50.732 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:50.732 "strip_size_kb": 64, 00:14:50.732 "state": "online", 00:14:50.732 "raid_level": "raid5f", 00:14:50.732 "superblock": false, 00:14:50.732 "num_base_bdevs": 3, 00:14:50.732 "num_base_bdevs_discovered": 2, 00:14:50.732 "num_base_bdevs_operational": 2, 00:14:50.732 "base_bdevs_list": [ 00:14:50.732 { 00:14:50.732 "name": null, 00:14:50.732 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:50.732 "is_configured": false, 00:14:50.732 "data_offset": 0, 00:14:50.732 "data_size": 65536 00:14:50.732 }, 00:14:50.732 { 00:14:50.732 "name": "BaseBdev2", 00:14:50.732 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:50.732 "is_configured": true, 00:14:50.732 "data_offset": 0, 00:14:50.732 "data_size": 65536 00:14:50.732 }, 00:14:50.732 { 00:14:50.732 "name": "BaseBdev3", 00:14:50.732 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:50.732 "is_configured": true, 00:14:50.732 "data_offset": 0, 00:14:50.732 "data_size": 65536 00:14:50.732 } 00:14:50.732 ] 00:14:50.732 }' 00:14:50.732 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.992 [2024-11-10 15:23:57.172972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:50.992 [2024-11-10 15:23:57.177504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.992 15:23:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:50.992 [2024-11-10 15:23:57.179659] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:51.932 
15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.932 "name": "raid_bdev1", 00:14:51.932 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:51.932 "strip_size_kb": 64, 00:14:51.932 "state": "online", 00:14:51.932 "raid_level": "raid5f", 00:14:51.932 "superblock": false, 00:14:51.932 "num_base_bdevs": 3, 00:14:51.932 "num_base_bdevs_discovered": 3, 00:14:51.932 "num_base_bdevs_operational": 3, 00:14:51.932 "process": { 00:14:51.932 "type": "rebuild", 00:14:51.932 "target": "spare", 00:14:51.932 "progress": { 00:14:51.932 "blocks": 20480, 00:14:51.932 "percent": 15 00:14:51.932 } 00:14:51.932 }, 00:14:51.932 "base_bdevs_list": [ 00:14:51.932 { 00:14:51.932 "name": "spare", 00:14:51.932 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:51.932 "is_configured": true, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 
00:14:51.932 }, 00:14:51.932 { 00:14:51.932 "name": "BaseBdev2", 00:14:51.932 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:51.932 "is_configured": true, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 00:14:51.932 }, 00:14:51.932 { 00:14:51.932 "name": "BaseBdev3", 00:14:51.932 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:51.932 "is_configured": true, 00:14:51.932 "data_offset": 0, 00:14:51.932 "data_size": 65536 00:14:51.932 } 00:14:51.932 ] 00:14:51.932 }' 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.932 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=452 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.192 "name": "raid_bdev1", 00:14:52.192 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:52.192 "strip_size_kb": 64, 00:14:52.192 "state": "online", 00:14:52.192 "raid_level": "raid5f", 00:14:52.192 "superblock": false, 00:14:52.192 "num_base_bdevs": 3, 00:14:52.192 "num_base_bdevs_discovered": 3, 00:14:52.192 "num_base_bdevs_operational": 3, 00:14:52.192 "process": { 00:14:52.192 "type": "rebuild", 00:14:52.192 "target": "spare", 00:14:52.192 "progress": { 00:14:52.192 "blocks": 22528, 00:14:52.192 "percent": 17 00:14:52.192 } 00:14:52.192 }, 00:14:52.192 "base_bdevs_list": [ 00:14:52.192 { 00:14:52.192 "name": "spare", 00:14:52.192 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:52.192 "is_configured": true, 00:14:52.192 "data_offset": 0, 00:14:52.192 "data_size": 65536 00:14:52.192 }, 00:14:52.192 { 00:14:52.192 "name": "BaseBdev2", 00:14:52.192 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:52.192 "is_configured": true, 00:14:52.192 "data_offset": 0, 00:14:52.192 "data_size": 65536 00:14:52.192 }, 00:14:52.192 { 00:14:52.192 "name": "BaseBdev3", 00:14:52.192 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:52.192 "is_configured": true, 00:14:52.192 "data_offset": 0, 00:14:52.192 "data_size": 65536 00:14:52.192 } 00:14:52.192 ] 00:14:52.192 }' 
00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.192 15:23:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.573 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.573 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.574 "name": "raid_bdev1", 00:14:53.574 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:53.574 "strip_size_kb": 64, 00:14:53.574 
"state": "online", 00:14:53.574 "raid_level": "raid5f", 00:14:53.574 "superblock": false, 00:14:53.574 "num_base_bdevs": 3, 00:14:53.574 "num_base_bdevs_discovered": 3, 00:14:53.574 "num_base_bdevs_operational": 3, 00:14:53.574 "process": { 00:14:53.574 "type": "rebuild", 00:14:53.574 "target": "spare", 00:14:53.574 "progress": { 00:14:53.574 "blocks": 47104, 00:14:53.574 "percent": 35 00:14:53.574 } 00:14:53.574 }, 00:14:53.574 "base_bdevs_list": [ 00:14:53.574 { 00:14:53.574 "name": "spare", 00:14:53.574 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:53.574 "is_configured": true, 00:14:53.574 "data_offset": 0, 00:14:53.574 "data_size": 65536 00:14:53.574 }, 00:14:53.574 { 00:14:53.574 "name": "BaseBdev2", 00:14:53.574 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:53.574 "is_configured": true, 00:14:53.574 "data_offset": 0, 00:14:53.574 "data_size": 65536 00:14:53.574 }, 00:14:53.574 { 00:14:53.574 "name": "BaseBdev3", 00:14:53.574 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:53.574 "is_configured": true, 00:14:53.574 "data_offset": 0, 00:14:53.574 "data_size": 65536 00:14:53.574 } 00:14:53.574 ] 00:14:53.574 }' 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.574 15:23:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.513 "name": "raid_bdev1", 00:14:54.513 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:54.513 "strip_size_kb": 64, 00:14:54.513 "state": "online", 00:14:54.513 "raid_level": "raid5f", 00:14:54.513 "superblock": false, 00:14:54.513 "num_base_bdevs": 3, 00:14:54.513 "num_base_bdevs_discovered": 3, 00:14:54.513 "num_base_bdevs_operational": 3, 00:14:54.513 "process": { 00:14:54.513 "type": "rebuild", 00:14:54.513 "target": "spare", 00:14:54.513 "progress": { 00:14:54.513 "blocks": 69632, 00:14:54.513 "percent": 53 00:14:54.513 } 00:14:54.513 }, 00:14:54.513 "base_bdevs_list": [ 00:14:54.513 { 00:14:54.513 "name": "spare", 00:14:54.513 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:54.513 "is_configured": true, 00:14:54.513 "data_offset": 0, 00:14:54.513 "data_size": 65536 00:14:54.513 }, 00:14:54.513 { 00:14:54.513 "name": "BaseBdev2", 00:14:54.513 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:54.513 "is_configured": true, 00:14:54.513 
"data_offset": 0, 00:14:54.513 "data_size": 65536 00:14:54.513 }, 00:14:54.513 { 00:14:54.513 "name": "BaseBdev3", 00:14:54.513 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:54.513 "is_configured": true, 00:14:54.513 "data_offset": 0, 00:14:54.513 "data_size": 65536 00:14:54.513 } 00:14:54.513 ] 00:14:54.513 }' 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.513 15:24:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.453 15:24:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.713 15:24:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.713 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.713 "name": "raid_bdev1", 00:14:55.713 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:55.713 "strip_size_kb": 64, 00:14:55.713 "state": "online", 00:14:55.713 "raid_level": "raid5f", 00:14:55.713 "superblock": false, 00:14:55.713 "num_base_bdevs": 3, 00:14:55.713 "num_base_bdevs_discovered": 3, 00:14:55.713 "num_base_bdevs_operational": 3, 00:14:55.714 "process": { 00:14:55.714 "type": "rebuild", 00:14:55.714 "target": "spare", 00:14:55.714 "progress": { 00:14:55.714 "blocks": 92160, 00:14:55.714 "percent": 70 00:14:55.714 } 00:14:55.714 }, 00:14:55.714 "base_bdevs_list": [ 00:14:55.714 { 00:14:55.714 "name": "spare", 00:14:55.714 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:55.714 "is_configured": true, 00:14:55.714 "data_offset": 0, 00:14:55.714 "data_size": 65536 00:14:55.714 }, 00:14:55.714 { 00:14:55.714 "name": "BaseBdev2", 00:14:55.714 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:55.714 "is_configured": true, 00:14:55.714 "data_offset": 0, 00:14:55.714 "data_size": 65536 00:14:55.714 }, 00:14:55.714 { 00:14:55.714 "name": "BaseBdev3", 00:14:55.714 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:55.714 "is_configured": true, 00:14:55.714 "data_offset": 0, 00:14:55.714 "data_size": 65536 00:14:55.714 } 00:14:55.714 ] 00:14:55.714 }' 00:14:55.714 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.714 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.714 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.714 15:24:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.714 15:24:01 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.654 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.654 "name": "raid_bdev1", 00:14:56.654 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:56.654 "strip_size_kb": 64, 00:14:56.654 "state": "online", 00:14:56.654 "raid_level": "raid5f", 00:14:56.654 "superblock": false, 00:14:56.654 "num_base_bdevs": 3, 00:14:56.654 "num_base_bdevs_discovered": 3, 00:14:56.654 "num_base_bdevs_operational": 3, 00:14:56.654 "process": { 00:14:56.654 "type": "rebuild", 00:14:56.654 "target": "spare", 00:14:56.654 "progress": { 00:14:56.654 "blocks": 116736, 00:14:56.654 "percent": 89 00:14:56.654 } 00:14:56.654 }, 00:14:56.654 "base_bdevs_list": [ 00:14:56.654 { 00:14:56.654 "name": "spare", 00:14:56.654 
"uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:56.654 "is_configured": true, 00:14:56.654 "data_offset": 0, 00:14:56.654 "data_size": 65536 00:14:56.654 }, 00:14:56.654 { 00:14:56.654 "name": "BaseBdev2", 00:14:56.654 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:56.654 "is_configured": true, 00:14:56.654 "data_offset": 0, 00:14:56.654 "data_size": 65536 00:14:56.655 }, 00:14:56.655 { 00:14:56.655 "name": "BaseBdev3", 00:14:56.655 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:56.655 "is_configured": true, 00:14:56.655 "data_offset": 0, 00:14:56.655 "data_size": 65536 00:14:56.655 } 00:14:56.655 ] 00:14:56.655 }' 00:14:56.655 15:24:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.915 15:24:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.915 15:24:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.915 15:24:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.915 15:24:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.484 [2024-11-10 15:24:03.624701] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:57.484 [2024-11-10 15:24:03.624824] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:57.484 [2024-11-10 15:24:03.624893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.743 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- 
# local process_type=rebuild 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.003 "name": "raid_bdev1", 00:14:58.003 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:58.003 "strip_size_kb": 64, 00:14:58.003 "state": "online", 00:14:58.003 "raid_level": "raid5f", 00:14:58.003 "superblock": false, 00:14:58.003 "num_base_bdevs": 3, 00:14:58.003 "num_base_bdevs_discovered": 3, 00:14:58.003 "num_base_bdevs_operational": 3, 00:14:58.003 "base_bdevs_list": [ 00:14:58.003 { 00:14:58.003 "name": "spare", 00:14:58.003 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 }, 00:14:58.003 { 00:14:58.003 "name": "BaseBdev2", 00:14:58.003 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 }, 00:14:58.003 { 00:14:58.003 "name": "BaseBdev3", 00:14:58.003 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 } 00:14:58.003 ] 00:14:58.003 }' 00:14:58.003 15:24:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.003 "name": "raid_bdev1", 00:14:58.003 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:58.003 "strip_size_kb": 64, 00:14:58.003 "state": "online", 00:14:58.003 "raid_level": "raid5f", 00:14:58.003 "superblock": false, 00:14:58.003 "num_base_bdevs": 3, 00:14:58.003 
"num_base_bdevs_discovered": 3, 00:14:58.003 "num_base_bdevs_operational": 3, 00:14:58.003 "base_bdevs_list": [ 00:14:58.003 { 00:14:58.003 "name": "spare", 00:14:58.003 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 }, 00:14:58.003 { 00:14:58.003 "name": "BaseBdev2", 00:14:58.003 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 }, 00:14:58.003 { 00:14:58.003 "name": "BaseBdev3", 00:14:58.003 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:58.003 "is_configured": true, 00:14:58.003 "data_offset": 0, 00:14:58.003 "data_size": 65536 00:14:58.003 } 00:14:58.003 ] 00:14:58.003 }' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.003 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.263 "name": "raid_bdev1", 00:14:58.263 "uuid": "7909dce2-fa39-44f4-bbdd-db9da10a206f", 00:14:58.263 "strip_size_kb": 64, 00:14:58.263 "state": "online", 00:14:58.263 "raid_level": "raid5f", 00:14:58.263 "superblock": false, 00:14:58.263 "num_base_bdevs": 3, 00:14:58.263 "num_base_bdevs_discovered": 3, 00:14:58.263 "num_base_bdevs_operational": 3, 00:14:58.263 "base_bdevs_list": [ 00:14:58.263 { 00:14:58.263 "name": "spare", 00:14:58.263 "uuid": "8aa196e1-1f1f-50b0-8781-5072e4572507", 00:14:58.263 "is_configured": true, 00:14:58.263 "data_offset": 0, 00:14:58.263 "data_size": 65536 00:14:58.263 }, 00:14:58.263 { 00:14:58.263 "name": "BaseBdev2", 00:14:58.263 "uuid": "fc53bcb1-3fbd-5e8d-911c-0175d1ef5bb8", 00:14:58.263 "is_configured": true, 00:14:58.263 "data_offset": 0, 00:14:58.263 "data_size": 65536 00:14:58.263 }, 00:14:58.263 { 00:14:58.263 "name": "BaseBdev3", 00:14:58.263 "uuid": "52ef73f1-eda8-5731-b64e-789446ba7e75", 00:14:58.263 "is_configured": true, 00:14:58.263 
"data_offset": 0, 00:14:58.263 "data_size": 65536 00:14:58.263 } 00:14:58.263 ] 00:14:58.263 }' 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.263 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.523 [2024-11-10 15:24:04.858517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.523 [2024-11-10 15:24:04.858597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.523 [2024-11-10 15:24:04.858684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.523 [2024-11-10 15:24:04.858778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.523 [2024-11-10 15:24:04.858794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.523 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.524 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.524 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.524 15:24:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.785 15:24:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:58.785 /dev/nbd0 00:14:58.785 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 
)) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.045 1+0 records in 00:14:59.045 1+0 records out 00:14:59.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392537 s, 10.4 MB/s 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:59.045 /dev/nbd1 00:14:59.045 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.305 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.306 1+0 records in 00:14:59.306 1+0 records out 00:14:59.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604983 s, 6.8 MB/s 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 
00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.306 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd1 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 93479 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 93479 ']' 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # kill -0 93479 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:59.566 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93479 00:14:59.826 killing process with pid 93479 00:14:59.826 Received shutdown signal, test time was about 60.000000 seconds 00:14:59.826 00:14:59.826 Latency(us) 00:14:59.826 [2024-11-10T15:24:06.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.826 [2024-11-10T15:24:06.189Z] =================================================================================================================== 00:14:59.826 [2024-11-10T15:24:06.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:59.826 
15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:59.826 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:59.826 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93479' 00:14:59.826 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 93479 00:14:59.826 [2024-11-10 15:24:05.959842] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.826 15:24:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 93479 00:14:59.826 [2024-11-10 15:24:06.035519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:00.086 15:24:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:00.086 ************************************ 00:15:00.086 END TEST raid5f_rebuild_test 00:15:00.086 ************************************ 00:15:00.086 00:15:00.086 real 0m13.768s 00:15:00.086 user 0m17.228s 00:15:00.086 sys 0m1.972s 00:15:00.086 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:00.086 15:24:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 15:24:06 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:00.086 15:24:06 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:00.086 15:24:06 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:00.086 15:24:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:00.086 ************************************ 00:15:00.086 START TEST raid5f_rebuild_test_sb 00:15:00.086 ************************************ 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 3 true false true 00:15:00.087 15:24:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 
00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:00.087 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=93907 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 93907 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 93907 ']' 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:00.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:00.347 15:24:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.347 [2024-11-10 15:24:06.540843] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:15:00.347 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:00.347 Zero copy mechanism will not be used. 00:15:00.347 [2024-11-10 15:24:06.541057] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93907 ] 00:15:00.347 [2024-11-10 15:24:06.679733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:00.607 [2024-11-10 15:24:06.719403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:00.607 [2024-11-10 15:24:06.760117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:00.607 [2024-11-10 15:24:06.835671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:00.607 [2024-11-10 15:24:06.835711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:15:01.177 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:01.177 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 BaseBdev1_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 [2024-11-10 15:24:07.374436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:15:01.178 [2024-11-10 15:24:07.374525] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:01.178 [2024-11-10 15:24:07.374555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:15:01.178 [2024-11-10 15:24:07.374569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:01.178 [2024-11-10 15:24:07.377064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:01.178 [2024-11-10 15:24:07.377100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:15:01.178 BaseBdev1
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 BaseBdev2_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 [2024-11-10 15:24:07.408841] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:15:01.178 [2024-11-10 15:24:07.408954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:01.178 [2024-11-10 15:24:07.409007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:15:01.178 [2024-11-10 15:24:07.409050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:01.178 [2024-11-10 15:24:07.411476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:01.178 [2024-11-10 15:24:07.411552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:15:01.178 BaseBdev2
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}"
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 BaseBdev3_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 [2024-11-10 15:24:07.443269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc
00:15:01.178 [2024-11-10 15:24:07.443373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:01.178 [2024-11-10 15:24:07.443427] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:15:01.178 [2024-11-10 15:24:07.443459] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:01.178 [2024-11-10 15:24:07.445820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:01.178 [2024-11-10 15:24:07.445894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3
00:15:01.178 BaseBdev3
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 spare_malloc
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 spare_delay
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 [2024-11-10 15:24:07.503466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:15:01.178 [2024-11-10 15:24:07.503530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:15:01.178 [2024-11-10 15:24:07.503550] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:15:01.178 [2024-11-10 15:24:07.503563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:15:01.178 [2024-11-10 15:24:07.506187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:15:01.178 [2024-11-10 15:24:07.506229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare
00:15:01.178 spare
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.178 [2024-11-10 15:24:07.515524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:15:01.178 [2024-11-10 15:24:07.517620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:15:01.178 [2024-11-10 15:24:07.517736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:15:01.178 [2024-11-10 15:24:07.517908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400
00:15:01.178 [2024-11-10 15:24:07.517926] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:15:01.178 [2024-11-10 15:24:07.518209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:15:01.178 [2024-11-10 15:24:07.518660] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400
00:15:01.178 [2024-11-10 15:24:07.518679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400
00:15:01.178 [2024-11-10 15:24:07.518800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.178 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.437 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.437 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:01.437 "name": "raid_bdev1",
00:15:01.437 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:01.437 "strip_size_kb": 64,
00:15:01.437 "state": "online",
00:15:01.437 "raid_level": "raid5f",
00:15:01.437 "superblock": true,
00:15:01.437 "num_base_bdevs": 3,
00:15:01.437 "num_base_bdevs_discovered": 3,
00:15:01.437 "num_base_bdevs_operational": 3,
00:15:01.437 "base_bdevs_list": [
00:15:01.437 {
00:15:01.437 "name": "BaseBdev1",
00:15:01.437 "uuid": "04348757-3a53-5667-b20e-9f9edf7cba98",
00:15:01.437 "is_configured": true,
00:15:01.437 "data_offset": 2048,
00:15:01.437 "data_size": 63488
00:15:01.437 },
00:15:01.437 {
00:15:01.437 "name": "BaseBdev2",
00:15:01.437 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:01.437 "is_configured": true,
00:15:01.437 "data_offset": 2048,
00:15:01.437 "data_size": 63488
00:15:01.437 },
00:15:01.437 {
00:15:01.437 "name": "BaseBdev3",
00:15:01.437 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:01.437 "is_configured": true,
00:15:01.437 "data_offset": 2048,
00:15:01.437 "data_size": 63488
00:15:01.437 }
00:15:01.437 ]
00:15:01.437 }'
00:15:01.437 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:01.438 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.697 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:15:01.697 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks'
00:15:01.697 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.697 15:24:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.697 [2024-11-10 15:24:07.997398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:01.697 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']'
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']'
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:01.957 [2024-11-10 15:24:08.249403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490
00:15:01.957 /dev/nbd0
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 ))
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:01.957 1+0 records in
00:15:01.957 1+0 records out
00:15:01.957 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327376 s, 12.5 MB/s
00:15:01.957 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']'
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']'
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128
00:15:02.217 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct
00:15:02.477 496+0 records in
00:15:02.477 496+0 records out
00:15:02.477 65011712 bytes (65 MB, 62 MiB) copied, 0.304504 s, 214 MB/s
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:02.477 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:15:02.737 [2024-11-10 15:24:08.844135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.737 [2024-11-10 15:24:08.878958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.737 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:02.737 "name": "raid_bdev1",
00:15:02.737 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:02.738 "strip_size_kb": 64,
00:15:02.738 "state": "online",
00:15:02.738 "raid_level": "raid5f",
00:15:02.738 "superblock": true,
00:15:02.738 "num_base_bdevs": 3,
00:15:02.738 "num_base_bdevs_discovered": 2,
00:15:02.738 "num_base_bdevs_operational": 2,
00:15:02.738 "base_bdevs_list": [
00:15:02.738 {
00:15:02.738 "name": null,
00:15:02.738 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:02.738 "is_configured": false,
00:15:02.738 "data_offset": 0,
00:15:02.738 "data_size": 63488
00:15:02.738 },
00:15:02.738 {
00:15:02.738 "name": "BaseBdev2",
00:15:02.738 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:02.738 "is_configured": true,
00:15:02.738 "data_offset": 2048,
00:15:02.738 "data_size": 63488
00:15:02.738 },
00:15:02.738 {
00:15:02.738 "name": "BaseBdev3",
00:15:02.738 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:02.738 "is_configured": true,
00:15:02.738 "data_offset": 2048,
00:15:02.738 "data_size": 63488
00:15:02.738 }
00:15:02.738 ]
00:15:02.738 }'
00:15:02.738 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:02.738 15:24:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.998 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:02.998 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:02.998 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:02.998 [2024-11-10 15:24:09.327099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:02.998 [2024-11-10 15:24:09.335036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029390
00:15:02.998 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:02.998 15:24:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1
00:15:02.998 [2024-11-10 15:24:09.337595] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:04.378 "name": "raid_bdev1",
00:15:04.378 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:04.378 "strip_size_kb": 64,
00:15:04.378 "state": "online",
00:15:04.378 "raid_level": "raid5f",
00:15:04.378 "superblock": true,
00:15:04.378 "num_base_bdevs": 3,
00:15:04.378 "num_base_bdevs_discovered": 3,
00:15:04.378 "num_base_bdevs_operational": 3,
00:15:04.378 "process": {
00:15:04.378 "type": "rebuild",
00:15:04.378 "target": "spare",
00:15:04.378 "progress": {
00:15:04.378 "blocks": 20480,
00:15:04.378 "percent": 16
00:15:04.378 }
00:15:04.378 },
00:15:04.378 "base_bdevs_list": [
00:15:04.378 {
00:15:04.378 "name": "spare",
00:15:04.378 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c",
00:15:04.378 "is_configured": true,
00:15:04.378 "data_offset": 2048,
00:15:04.378 "data_size": 63488
00:15:04.378 },
00:15:04.378 {
00:15:04.378 "name": "BaseBdev2",
00:15:04.378 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:04.378 "is_configured": true,
00:15:04.378 "data_offset": 2048,
00:15:04.378 "data_size": 63488
00:15:04.378 },
00:15:04.378 {
00:15:04.378 "name": "BaseBdev3",
00:15:04.378 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:04.378 "is_configured": true,
00:15:04.378 "data_offset": 2048,
00:15:04.378 "data_size": 63488
00:15:04.378 }
00:15:04.378 ]
00:15:04.378 }'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.378 [2024-11-10 15:24:10.499214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:04.378 [2024-11-10 15:24:10.548036] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:04.378 [2024-11-10 15:24:10.548096] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:04.378 [2024-11-10 15:24:10.548115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:04.378 [2024-11-10 15:24:10.548130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:15:04.378 "name": "raid_bdev1",
00:15:04.378 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:04.378 "strip_size_kb": 64,
00:15:04.378 "state": "online",
00:15:04.378 "raid_level": "raid5f",
00:15:04.378 "superblock": true,
00:15:04.378 "num_base_bdevs": 3,
00:15:04.378 "num_base_bdevs_discovered": 2,
00:15:04.378 "num_base_bdevs_operational": 2,
00:15:04.378 "base_bdevs_list": [
00:15:04.378 {
00:15:04.378 "name": null,
00:15:04.378 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.378 "is_configured": false,
00:15:04.378 "data_offset": 0,
00:15:04.378 "data_size": 63488
00:15:04.378 },
00:15:04.378 {
00:15:04.378 "name": "BaseBdev2",
00:15:04.378 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:04.378 "is_configured": true,
00:15:04.378 "data_offset": 2048,
00:15:04.378 "data_size": 63488
00:15:04.378 },
00:15:04.378 {
00:15:04.378 "name": "BaseBdev3",
00:15:04.378 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:04.378 "is_configured": true,
00:15:04.378 "data_offset": 2048,
00:15:04.378 "data_size": 63488
00:15:04.378 }
00:15:04.378 ]
00:15:04.378 }'
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:15:04.378 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.638 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:04.638 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:04.638 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:15:04.638 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:15:04.638 15:24:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:04.898 "name": "raid_bdev1",
00:15:04.898 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:04.898 "strip_size_kb": 64,
00:15:04.898 "state": "online",
00:15:04.898 "raid_level": "raid5f",
00:15:04.898 "superblock": true,
00:15:04.898 "num_base_bdevs": 3,
00:15:04.898 "num_base_bdevs_discovered": 2,
00:15:04.898 "num_base_bdevs_operational": 2,
00:15:04.898 "base_bdevs_list": [
00:15:04.898 {
00:15:04.898 "name": null,
00:15:04.898 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:04.898 "is_configured": false,
00:15:04.898 "data_offset": 0,
00:15:04.898 "data_size": 63488
00:15:04.898 },
00:15:04.898 {
00:15:04.898 "name": "BaseBdev2",
00:15:04.898 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:04.898 "is_configured": true,
00:15:04.898 "data_offset": 2048,
00:15:04.898 "data_size": 63488
00:15:04.898 },
00:15:04.898 {
00:15:04.898 "name": "BaseBdev3",
00:15:04.898 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:04.898 "is_configured": true,
00:15:04.898 "data_offset": 2048,
00:15:04.898 "data_size": 63488
00:15:04.898 }
00:15:04.898 ]
00:15:04.898 }'
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:04.898 [2024-11-10 15:24:11.146047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:04.898 [2024-11-10 15:24:11.152301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029460
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:04.898 15:24:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1
00:15:04.898 [2024-11-10 15:24:11.154782] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:05.837 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:06.098 "name": "raid_bdev1",
00:15:06.098 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:06.098 "strip_size_kb": 64,
00:15:06.098 "state": "online",
00:15:06.098 "raid_level": "raid5f",
00:15:06.098 "superblock": true,
00:15:06.098 "num_base_bdevs": 3,
00:15:06.098 "num_base_bdevs_discovered": 3,
00:15:06.098 "num_base_bdevs_operational": 3,
00:15:06.098 "process": {
00:15:06.098 "type": "rebuild",
00:15:06.098 "target": "spare",
00:15:06.098 "progress": {
00:15:06.098 "blocks": 20480,
00:15:06.098 "percent": 16
00:15:06.098 }
00:15:06.098 },
00:15:06.098 "base_bdevs_list": [
00:15:06.098 {
00:15:06.098 "name": "spare",
00:15:06.098 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c",
00:15:06.098 "is_configured": true,
00:15:06.098 "data_offset": 2048,
00:15:06.098 "data_size": 63488
00:15:06.098 },
00:15:06.098 {
00:15:06.098 "name": "BaseBdev2",
00:15:06.098 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c",
00:15:06.098 "is_configured": true,
00:15:06.098 "data_offset": 2048,
00:15:06.098 "data_size": 63488
00:15:06.098 },
00:15:06.098 {
00:15:06.098 "name": "BaseBdev3",
00:15:06.098 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0",
00:15:06.098 "is_configured": true,
00:15:06.098 "data_offset": 2048,
00:15:06.098 "data_size": 63488
00:15:06.098 }
00:15:06.098 ]
00:15:06.098 }'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:15:06.098 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=466
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:15:06.098 "name": "raid_bdev1",
00:15:06.098 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08",
00:15:06.098 "strip_size_kb": 64,
00:15:06.098 "state": "online",
00:15:06.098 "raid_level": "raid5f",
00:15:06.098 "superblock": true,
00:15:06.098 "num_base_bdevs": 3,
00:15:06.098 "num_base_bdevs_discovered": 3,
00:15:06.098 "num_base_bdevs_operational": 3,
00:15:06.098 "process": {
00:15:06.098 "type": "rebuild",
00:15:06.098 "target": "spare",
00:15:06.098 "progress": {
00:15:06.098 "blocks": 22528,
00:15:06.098 "percent": 17
00:15:06.098 }
00:15:06.098 },
00:15:06.098 "base_bdevs_list": [
00:15:06.098 {
00:15:06.098 "name": "spare",
00:15:06.098 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c",
00:15:06.098 "is_configured": true,
00:15:06.098 "data_offset": 2048,
"data_size": 63488 00:15:06.098 }, 00:15:06.098 { 00:15:06.098 "name": "BaseBdev2", 00:15:06.098 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:06.098 "is_configured": true, 00:15:06.098 "data_offset": 2048, 00:15:06.098 "data_size": 63488 00:15:06.098 }, 00:15:06.098 { 00:15:06.098 "name": "BaseBdev3", 00:15:06.098 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:06.098 "is_configured": true, 00:15:06.098 "data_offset": 2048, 00:15:06.098 "data_size": 63488 00:15:06.098 } 00:15:06.098 ] 00:15:06.098 }' 00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.098 15:24:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.480 "name": "raid_bdev1", 00:15:07.480 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:07.480 "strip_size_kb": 64, 00:15:07.480 "state": "online", 00:15:07.480 "raid_level": "raid5f", 00:15:07.480 "superblock": true, 00:15:07.480 "num_base_bdevs": 3, 00:15:07.480 "num_base_bdevs_discovered": 3, 00:15:07.480 "num_base_bdevs_operational": 3, 00:15:07.480 "process": { 00:15:07.480 "type": "rebuild", 00:15:07.480 "target": "spare", 00:15:07.480 "progress": { 00:15:07.480 "blocks": 45056, 00:15:07.480 "percent": 35 00:15:07.480 } 00:15:07.480 }, 00:15:07.480 "base_bdevs_list": [ 00:15:07.480 { 00:15:07.480 "name": "spare", 00:15:07.480 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:07.480 "is_configured": true, 00:15:07.480 "data_offset": 2048, 00:15:07.480 "data_size": 63488 00:15:07.480 }, 00:15:07.480 { 00:15:07.480 "name": "BaseBdev2", 00:15:07.480 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:07.480 "is_configured": true, 00:15:07.480 "data_offset": 2048, 00:15:07.480 "data_size": 63488 00:15:07.480 }, 00:15:07.480 { 00:15:07.480 "name": "BaseBdev3", 00:15:07.480 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:07.480 "is_configured": true, 00:15:07.480 "data_offset": 2048, 00:15:07.480 "data_size": 63488 00:15:07.480 } 00:15:07.480 ] 00:15:07.480 }' 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.480 15:24:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.480 15:24:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.419 "name": "raid_bdev1", 00:15:08.419 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:08.419 "strip_size_kb": 64, 00:15:08.419 "state": "online", 00:15:08.419 "raid_level": "raid5f", 00:15:08.419 "superblock": true, 00:15:08.419 "num_base_bdevs": 3, 00:15:08.419 "num_base_bdevs_discovered": 3, 00:15:08.419 "num_base_bdevs_operational": 
3, 00:15:08.419 "process": { 00:15:08.419 "type": "rebuild", 00:15:08.419 "target": "spare", 00:15:08.419 "progress": { 00:15:08.419 "blocks": 69632, 00:15:08.419 "percent": 54 00:15:08.419 } 00:15:08.419 }, 00:15:08.419 "base_bdevs_list": [ 00:15:08.419 { 00:15:08.419 "name": "spare", 00:15:08.419 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:08.419 "is_configured": true, 00:15:08.419 "data_offset": 2048, 00:15:08.419 "data_size": 63488 00:15:08.419 }, 00:15:08.419 { 00:15:08.419 "name": "BaseBdev2", 00:15:08.419 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:08.419 "is_configured": true, 00:15:08.419 "data_offset": 2048, 00:15:08.419 "data_size": 63488 00:15:08.419 }, 00:15:08.419 { 00:15:08.419 "name": "BaseBdev3", 00:15:08.419 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:08.419 "is_configured": true, 00:15:08.419 "data_offset": 2048, 00:15:08.419 "data_size": 63488 00:15:08.419 } 00:15:08.419 ] 00:15:08.419 }' 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.419 15:24:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.800 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.801 
15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.801 "name": "raid_bdev1", 00:15:09.801 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:09.801 "strip_size_kb": 64, 00:15:09.801 "state": "online", 00:15:09.801 "raid_level": "raid5f", 00:15:09.801 "superblock": true, 00:15:09.801 "num_base_bdevs": 3, 00:15:09.801 "num_base_bdevs_discovered": 3, 00:15:09.801 "num_base_bdevs_operational": 3, 00:15:09.801 "process": { 00:15:09.801 "type": "rebuild", 00:15:09.801 "target": "spare", 00:15:09.801 "progress": { 00:15:09.801 "blocks": 92160, 00:15:09.801 "percent": 72 00:15:09.801 } 00:15:09.801 }, 00:15:09.801 "base_bdevs_list": [ 00:15:09.801 { 00:15:09.801 "name": "spare", 00:15:09.801 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:09.801 "is_configured": true, 00:15:09.801 "data_offset": 2048, 00:15:09.801 "data_size": 63488 00:15:09.801 }, 00:15:09.801 { 00:15:09.801 "name": "BaseBdev2", 00:15:09.801 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:09.801 "is_configured": true, 00:15:09.801 "data_offset": 2048, 00:15:09.801 "data_size": 63488 00:15:09.801 }, 00:15:09.801 { 00:15:09.801 "name": "BaseBdev3", 00:15:09.801 "uuid": 
"165cac35-d712-5ad4-8425-1911d97490f0", 00:15:09.801 "is_configured": true, 00:15:09.801 "data_offset": 2048, 00:15:09.801 "data_size": 63488 00:15:09.801 } 00:15:09.801 ] 00:15:09.801 }' 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.801 15:24:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.740 
15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.740 "name": "raid_bdev1", 00:15:10.740 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:10.740 "strip_size_kb": 64, 00:15:10.740 "state": "online", 00:15:10.740 "raid_level": "raid5f", 00:15:10.740 "superblock": true, 00:15:10.740 "num_base_bdevs": 3, 00:15:10.740 "num_base_bdevs_discovered": 3, 00:15:10.740 "num_base_bdevs_operational": 3, 00:15:10.740 "process": { 00:15:10.740 "type": "rebuild", 00:15:10.740 "target": "spare", 00:15:10.740 "progress": { 00:15:10.740 "blocks": 116736, 00:15:10.740 "percent": 91 00:15:10.740 } 00:15:10.740 }, 00:15:10.740 "base_bdevs_list": [ 00:15:10.740 { 00:15:10.740 "name": "spare", 00:15:10.740 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:10.740 "is_configured": true, 00:15:10.740 "data_offset": 2048, 00:15:10.740 "data_size": 63488 00:15:10.740 }, 00:15:10.740 { 00:15:10.740 "name": "BaseBdev2", 00:15:10.740 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:10.740 "is_configured": true, 00:15:10.740 "data_offset": 2048, 00:15:10.740 "data_size": 63488 00:15:10.740 }, 00:15:10.740 { 00:15:10.740 "name": "BaseBdev3", 00:15:10.740 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:10.740 "is_configured": true, 00:15:10.740 "data_offset": 2048, 00:15:10.740 "data_size": 63488 00:15:10.740 } 00:15:10.740 ] 00:15:10.740 }' 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.740 15:24:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.740 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.741 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.741 15:24:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.310 [2024-11-10 15:24:17.404831] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:11.310 [2024-11-10 15:24:17.404967] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:11.310 [2024-11-10 15:24:17.405156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.879 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.879 "name": "raid_bdev1", 00:15:11.879 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:11.879 "strip_size_kb": 64, 00:15:11.879 "state": "online", 00:15:11.879 "raid_level": "raid5f", 00:15:11.879 "superblock": true, 00:15:11.879 "num_base_bdevs": 3, 00:15:11.879 "num_base_bdevs_discovered": 3, 
00:15:11.879 "num_base_bdevs_operational": 3, 00:15:11.879 "base_bdevs_list": [ 00:15:11.879 { 00:15:11.879 "name": "spare", 00:15:11.879 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:11.879 "is_configured": true, 00:15:11.879 "data_offset": 2048, 00:15:11.879 "data_size": 63488 00:15:11.879 }, 00:15:11.879 { 00:15:11.879 "name": "BaseBdev2", 00:15:11.879 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:11.879 "is_configured": true, 00:15:11.879 "data_offset": 2048, 00:15:11.879 "data_size": 63488 00:15:11.879 }, 00:15:11.879 { 00:15:11.879 "name": "BaseBdev3", 00:15:11.879 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:11.880 "is_configured": true, 00:15:11.880 "data_offset": 2048, 00:15:11.880 "data_size": 63488 00:15:11.880 } 00:15:11.880 ] 00:15:11.880 }' 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.880 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.140 "name": "raid_bdev1", 00:15:12.140 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:12.140 "strip_size_kb": 64, 00:15:12.140 "state": "online", 00:15:12.140 "raid_level": "raid5f", 00:15:12.140 "superblock": true, 00:15:12.140 "num_base_bdevs": 3, 00:15:12.140 "num_base_bdevs_discovered": 3, 00:15:12.140 "num_base_bdevs_operational": 3, 00:15:12.140 "base_bdevs_list": [ 00:15:12.140 { 00:15:12.140 "name": "spare", 00:15:12.140 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 }, 00:15:12.140 { 00:15:12.140 "name": "BaseBdev2", 00:15:12.140 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 }, 00:15:12.140 { 00:15:12.140 "name": "BaseBdev3", 00:15:12.140 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 } 00:15:12.140 ] 00:15:12.140 }' 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.140 "name": "raid_bdev1", 00:15:12.140 "uuid": 
"435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:12.140 "strip_size_kb": 64, 00:15:12.140 "state": "online", 00:15:12.140 "raid_level": "raid5f", 00:15:12.140 "superblock": true, 00:15:12.140 "num_base_bdevs": 3, 00:15:12.140 "num_base_bdevs_discovered": 3, 00:15:12.140 "num_base_bdevs_operational": 3, 00:15:12.140 "base_bdevs_list": [ 00:15:12.140 { 00:15:12.140 "name": "spare", 00:15:12.140 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 }, 00:15:12.140 { 00:15:12.140 "name": "BaseBdev2", 00:15:12.140 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 }, 00:15:12.140 { 00:15:12.140 "name": "BaseBdev3", 00:15:12.140 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:12.140 "is_configured": true, 00:15:12.140 "data_offset": 2048, 00:15:12.140 "data_size": 63488 00:15:12.140 } 00:15:12.140 ] 00:15:12.140 }' 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.140 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.711 [2024-11-10 15:24:18.806977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:12.711 [2024-11-10 15:24:18.807023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:12.711 [2024-11-10 15:24:18.807120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.711 [2024-11-10 15:24:18.807214] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.711 [2024-11-10 15:24:18.807236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.711 15:24:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:12.711 /dev/nbd0 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.971 1+0 records in 00:15:12.971 1+0 records out 00:15:12.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446411 s, 9.2 MB/s 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.971 15:24:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.971 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:12.972 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:12.972 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.972 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:12.972 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:12.972 /dev/nbd1 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.231 1+0 records in 00:15:13.231 1+0 records out 00:15:13.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00407833 s, 1.0 MB/s 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.231 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.490 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.750 [2024-11-10 15:24:19.896600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:13.750 [2024-11-10 15:24:19.896726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.750 [2024-11-10 15:24:19.896753] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:13.750 [2024-11-10 15:24:19.896765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.750 [2024-11-10 15:24:19.899404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.750 [2024-11-10 15:24:19.899448] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:13.750 [2024-11-10 15:24:19.899540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:13.750 [2024-11-10 15:24:19.899601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:13.750 [2024-11-10 15:24:19.899751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:13.750 [2024-11-10 15:24:19.899857] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.750 spare 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.750 15:24:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.750 [2024-11-10 15:24:19.999940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:13.750 [2024-11-10 15:24:19.999974] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.750 [2024-11-10 15:24:20.000275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047b10 00:15:13.750 [2024-11-10 15:24:20.000755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:13.750 [2024-11-10 15:24:20.000775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:13.750 [2024-11-10 15:24:20.000924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.750 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.750 "name": "raid_bdev1", 00:15:13.750 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:13.750 "strip_size_kb": 64, 00:15:13.750 "state": "online", 00:15:13.750 "raid_level": "raid5f", 00:15:13.750 "superblock": true, 00:15:13.750 "num_base_bdevs": 3, 00:15:13.750 "num_base_bdevs_discovered": 3, 00:15:13.750 "num_base_bdevs_operational": 3, 00:15:13.750 "base_bdevs_list": [ 00:15:13.750 { 00:15:13.750 "name": "spare", 00:15:13.750 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:13.750 "is_configured": true, 00:15:13.750 "data_offset": 2048, 00:15:13.750 "data_size": 63488 00:15:13.750 }, 00:15:13.750 { 00:15:13.750 "name": "BaseBdev2", 00:15:13.751 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:13.751 "is_configured": true, 00:15:13.751 "data_offset": 
2048, 00:15:13.751 "data_size": 63488 00:15:13.751 }, 00:15:13.751 { 00:15:13.751 "name": "BaseBdev3", 00:15:13.751 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:13.751 "is_configured": true, 00:15:13.751 "data_offset": 2048, 00:15:13.751 "data_size": 63488 00:15:13.751 } 00:15:13.751 ] 00:15:13.751 }' 00:15:13.751 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.751 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:14.321 "name": "raid_bdev1", 00:15:14.321 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:14.321 "strip_size_kb": 64, 00:15:14.321 "state": "online", 00:15:14.321 "raid_level": "raid5f", 00:15:14.321 "superblock": true, 00:15:14.321 
"num_base_bdevs": 3, 00:15:14.321 "num_base_bdevs_discovered": 3, 00:15:14.321 "num_base_bdevs_operational": 3, 00:15:14.321 "base_bdevs_list": [ 00:15:14.321 { 00:15:14.321 "name": "spare", 00:15:14.321 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:14.321 "is_configured": true, 00:15:14.321 "data_offset": 2048, 00:15:14.321 "data_size": 63488 00:15:14.321 }, 00:15:14.321 { 00:15:14.321 "name": "BaseBdev2", 00:15:14.321 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:14.321 "is_configured": true, 00:15:14.321 "data_offset": 2048, 00:15:14.321 "data_size": 63488 00:15:14.321 }, 00:15:14.321 { 00:15:14.321 "name": "BaseBdev3", 00:15:14.321 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:14.321 "is_configured": true, 00:15:14.321 "data_offset": 2048, 00:15:14.321 "data_size": 63488 00:15:14.321 } 00:15:14.321 ] 00:15:14.321 }' 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:14.321 15:24:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.321 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.321 [2024-11-10 15:24:20.677088] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.581 15:24:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.581 "name": "raid_bdev1", 00:15:14.581 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:14.581 "strip_size_kb": 64, 00:15:14.581 "state": "online", 00:15:14.581 "raid_level": "raid5f", 00:15:14.581 "superblock": true, 00:15:14.581 "num_base_bdevs": 3, 00:15:14.581 "num_base_bdevs_discovered": 2, 00:15:14.581 "num_base_bdevs_operational": 2, 00:15:14.581 "base_bdevs_list": [ 00:15:14.581 { 00:15:14.581 "name": null, 00:15:14.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.581 "is_configured": false, 00:15:14.581 "data_offset": 0, 00:15:14.581 "data_size": 63488 00:15:14.581 }, 00:15:14.581 { 00:15:14.581 "name": "BaseBdev2", 00:15:14.581 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:14.581 "is_configured": true, 00:15:14.581 "data_offset": 2048, 00:15:14.581 "data_size": 63488 00:15:14.581 }, 00:15:14.581 { 00:15:14.581 "name": "BaseBdev3", 00:15:14.581 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:14.581 "is_configured": true, 00:15:14.581 "data_offset": 2048, 00:15:14.581 "data_size": 63488 00:15:14.581 } 00:15:14.581 ] 00:15:14.581 }' 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.581 15:24:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.841 15:24:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:14.841 15:24:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.841 15:24:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.841 [2024-11-10 15:24:21.077197] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.841 [2024-11-10 15:24:21.077421] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:14.841 [2024-11-10 15:24:21.077504] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:14.841 [2024-11-10 15:24:21.077602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.841 [2024-11-10 15:24:21.085412] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047be0 00:15:14.841 15:24:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.841 15:24:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:14.841 [2024-11-10 15:24:21.087946] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:15:15.780 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:16.040 "name": "raid_bdev1", 00:15:16.040 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:16.040 "strip_size_kb": 64, 00:15:16.040 "state": "online", 00:15:16.040 "raid_level": "raid5f", 00:15:16.040 "superblock": true, 00:15:16.040 "num_base_bdevs": 3, 00:15:16.040 "num_base_bdevs_discovered": 3, 00:15:16.040 "num_base_bdevs_operational": 3, 00:15:16.040 "process": { 00:15:16.040 "type": "rebuild", 00:15:16.040 "target": "spare", 00:15:16.040 "progress": { 00:15:16.040 "blocks": 20480, 00:15:16.040 "percent": 16 00:15:16.040 } 00:15:16.040 }, 00:15:16.040 "base_bdevs_list": [ 00:15:16.040 { 00:15:16.040 "name": "spare", 00:15:16.040 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:16.040 "is_configured": true, 00:15:16.040 "data_offset": 2048, 00:15:16.040 "data_size": 63488 00:15:16.040 }, 00:15:16.040 { 00:15:16.040 "name": "BaseBdev2", 00:15:16.040 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:16.040 "is_configured": true, 00:15:16.040 "data_offset": 2048, 00:15:16.040 "data_size": 63488 00:15:16.040 }, 00:15:16.040 { 00:15:16.040 "name": "BaseBdev3", 00:15:16.040 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:16.040 "is_configured": true, 00:15:16.040 "data_offset": 2048, 00:15:16.040 "data_size": 63488 00:15:16.040 } 00:15:16.040 ] 00:15:16.040 }' 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 [2024-11-10 15:24:22.249883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.040 [2024-11-10 15:24:22.298075] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:16.040 [2024-11-10 15:24:22.298135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.040 [2024-11-10 15:24:22.298151] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.040 [2024-11-10 15:24:22.298167] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.040 15:24:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.040 "name": "raid_bdev1", 00:15:16.040 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:16.040 "strip_size_kb": 64, 00:15:16.040 "state": "online", 00:15:16.040 "raid_level": "raid5f", 00:15:16.040 "superblock": true, 00:15:16.040 "num_base_bdevs": 3, 00:15:16.040 "num_base_bdevs_discovered": 2, 00:15:16.040 "num_base_bdevs_operational": 2, 00:15:16.040 "base_bdevs_list": [ 00:15:16.040 { 00:15:16.040 "name": null, 00:15:16.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.040 "is_configured": false, 00:15:16.040 "data_offset": 0, 00:15:16.040 "data_size": 63488 00:15:16.040 }, 00:15:16.040 { 00:15:16.040 "name": "BaseBdev2", 00:15:16.040 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:16.040 "is_configured": true, 00:15:16.040 "data_offset": 2048, 00:15:16.040 "data_size": 63488 00:15:16.040 }, 00:15:16.040 { 00:15:16.040 "name": "BaseBdev3", 00:15:16.040 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:16.040 "is_configured": true, 00:15:16.040 "data_offset": 2048, 00:15:16.040 "data_size": 63488 00:15:16.040 } 00:15:16.040 ] 00:15:16.040 }' 00:15:16.040 15:24:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.040 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.611 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.611 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.611 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.611 [2024-11-10 15:24:22.779467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.611 [2024-11-10 15:24:22.779586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.611 [2024-11-10 15:24:22.779628] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:16.611 [2024-11-10 15:24:22.779659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.611 [2024-11-10 15:24:22.780209] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.611 [2024-11-10 15:24:22.780276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.611 [2024-11-10 15:24:22.780390] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:16.611 [2024-11-10 15:24:22.780433] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.611 [2024-11-10 15:24:22.780479] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:16.611 [2024-11-10 15:24:22.780533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.611 [2024-11-10 15:24:22.787514] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047cb0 00:15:16.611 spare 00:15:16.611 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.611 15:24:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:16.611 [2024-11-10 15:24:22.789974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.550 "name": "raid_bdev1", 00:15:17.550 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:17.550 "strip_size_kb": 64, 00:15:17.550 "state": 
"online", 00:15:17.550 "raid_level": "raid5f", 00:15:17.550 "superblock": true, 00:15:17.550 "num_base_bdevs": 3, 00:15:17.550 "num_base_bdevs_discovered": 3, 00:15:17.550 "num_base_bdevs_operational": 3, 00:15:17.550 "process": { 00:15:17.550 "type": "rebuild", 00:15:17.550 "target": "spare", 00:15:17.550 "progress": { 00:15:17.550 "blocks": 20480, 00:15:17.550 "percent": 16 00:15:17.550 } 00:15:17.550 }, 00:15:17.550 "base_bdevs_list": [ 00:15:17.550 { 00:15:17.550 "name": "spare", 00:15:17.550 "uuid": "f42e7490-98ec-5e65-b8de-14233b655d8c", 00:15:17.550 "is_configured": true, 00:15:17.550 "data_offset": 2048, 00:15:17.550 "data_size": 63488 00:15:17.550 }, 00:15:17.550 { 00:15:17.550 "name": "BaseBdev2", 00:15:17.550 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:17.550 "is_configured": true, 00:15:17.550 "data_offset": 2048, 00:15:17.550 "data_size": 63488 00:15:17.550 }, 00:15:17.550 { 00:15:17.550 "name": "BaseBdev3", 00:15:17.550 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:17.550 "is_configured": true, 00:15:17.550 "data_offset": 2048, 00:15:17.550 "data_size": 63488 00:15:17.550 } 00:15:17.550 ] 00:15:17.550 }' 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.550 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.810 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.810 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.810 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.810 15:24:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.810 [2024-11-10 15:24:23.951914] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.810 [2024-11-10 15:24:24.000097] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.810 [2024-11-10 15:24:24.000147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.810 [2024-11-10 15:24:24.000170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.810 [2024-11-10 15:24:24.000177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.810 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.810 "name": "raid_bdev1", 00:15:17.810 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:17.810 "strip_size_kb": 64, 00:15:17.810 "state": "online", 00:15:17.810 "raid_level": "raid5f", 00:15:17.810 "superblock": true, 00:15:17.810 "num_base_bdevs": 3, 00:15:17.810 "num_base_bdevs_discovered": 2, 00:15:17.810 "num_base_bdevs_operational": 2, 00:15:17.810 "base_bdevs_list": [ 00:15:17.810 { 00:15:17.810 "name": null, 00:15:17.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.810 "is_configured": false, 00:15:17.810 "data_offset": 0, 00:15:17.810 "data_size": 63488 00:15:17.810 }, 00:15:17.810 { 00:15:17.810 "name": "BaseBdev2", 00:15:17.810 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:17.810 "is_configured": true, 00:15:17.810 "data_offset": 2048, 00:15:17.810 "data_size": 63488 00:15:17.810 }, 00:15:17.810 { 00:15:17.810 "name": "BaseBdev3", 00:15:17.810 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:17.810 "is_configured": true, 00:15:17.810 "data_offset": 2048, 00:15:17.810 "data_size": 63488 00:15:17.810 } 00:15:17.810 ] 00:15:17.810 }' 00:15:17.811 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.811 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:18.380 "name": "raid_bdev1", 00:15:18.380 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:18.380 "strip_size_kb": 64, 00:15:18.380 "state": "online", 00:15:18.380 "raid_level": "raid5f", 00:15:18.380 "superblock": true, 00:15:18.380 "num_base_bdevs": 3, 00:15:18.380 "num_base_bdevs_discovered": 2, 00:15:18.380 "num_base_bdevs_operational": 2, 00:15:18.380 "base_bdevs_list": [ 00:15:18.380 { 00:15:18.380 "name": null, 00:15:18.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.380 "is_configured": false, 00:15:18.380 "data_offset": 0, 00:15:18.380 "data_size": 63488 00:15:18.380 }, 00:15:18.380 { 00:15:18.380 "name": "BaseBdev2", 00:15:18.380 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:18.380 "is_configured": true, 00:15:18.380 "data_offset": 2048, 00:15:18.380 "data_size": 63488 00:15:18.380 }, 00:15:18.380 { 00:15:18.380 "name": "BaseBdev3", 00:15:18.380 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:18.380 "is_configured": true, 
00:15:18.380 "data_offset": 2048, 00:15:18.380 "data_size": 63488 00:15:18.380 } 00:15:18.380 ] 00:15:18.380 }' 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.380 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:18.380 [2024-11-10 15:24:24.649572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:18.380 [2024-11-10 15:24:24.649624] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.380 [2024-11-10 15:24:24.649648] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:18.380 [2024-11-10 15:24:24.649657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.380 [2024-11-10 15:24:24.650125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.380 [2024-11-10 
15:24:24.650144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.381 [2024-11-10 15:24:24.650219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:18.381 [2024-11-10 15:24:24.650234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.381 [2024-11-10 15:24:24.650244] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:18.381 [2024-11-10 15:24:24.650263] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:18.381 BaseBdev1 00:15:18.381 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.381 15:24:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.320 15:24:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.320 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.580 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.580 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.580 "name": "raid_bdev1", 00:15:19.580 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:19.580 "strip_size_kb": 64, 00:15:19.580 "state": "online", 00:15:19.580 "raid_level": "raid5f", 00:15:19.580 "superblock": true, 00:15:19.580 "num_base_bdevs": 3, 00:15:19.580 "num_base_bdevs_discovered": 2, 00:15:19.580 "num_base_bdevs_operational": 2, 00:15:19.580 "base_bdevs_list": [ 00:15:19.580 { 00:15:19.580 "name": null, 00:15:19.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.580 "is_configured": false, 00:15:19.580 "data_offset": 0, 00:15:19.580 "data_size": 63488 00:15:19.580 }, 00:15:19.580 { 00:15:19.580 "name": "BaseBdev2", 00:15:19.580 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:19.580 "is_configured": true, 00:15:19.580 "data_offset": 2048, 00:15:19.580 "data_size": 63488 00:15:19.580 }, 00:15:19.580 { 00:15:19.580 "name": "BaseBdev3", 00:15:19.580 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:19.580 "is_configured": true, 00:15:19.580 "data_offset": 2048, 00:15:19.580 "data_size": 63488 00:15:19.580 } 00:15:19.580 ] 00:15:19.580 }' 00:15:19.580 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.580 15:24:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.840 "name": "raid_bdev1", 00:15:19.840 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:19.840 "strip_size_kb": 64, 00:15:19.840 "state": "online", 00:15:19.840 "raid_level": "raid5f", 00:15:19.840 "superblock": true, 00:15:19.840 "num_base_bdevs": 3, 00:15:19.840 "num_base_bdevs_discovered": 2, 00:15:19.840 "num_base_bdevs_operational": 2, 00:15:19.840 "base_bdevs_list": [ 00:15:19.840 { 00:15:19.840 "name": null, 00:15:19.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.840 "is_configured": false, 00:15:19.840 "data_offset": 0, 00:15:19.840 "data_size": 63488 00:15:19.840 }, 00:15:19.840 { 00:15:19.840 "name": "BaseBdev2", 00:15:19.840 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 
00:15:19.840 "is_configured": true, 00:15:19.840 "data_offset": 2048, 00:15:19.840 "data_size": 63488 00:15:19.840 }, 00:15:19.840 { 00:15:19.840 "name": "BaseBdev3", 00:15:19.840 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:19.840 "is_configured": true, 00:15:19.840 "data_offset": 2048, 00:15:19.840 "data_size": 63488 00:15:19.840 } 00:15:19.840 ] 00:15:19.840 }' 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.840 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.100 15:24:26 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.100 [2024-11-10 15:24:26.250066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.100 [2024-11-10 15:24:26.250222] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.100 [2024-11-10 15:24:26.250237] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:20.100 request: 00:15:20.100 { 00:15:20.100 "base_bdev": "BaseBdev1", 00:15:20.100 "raid_bdev": "raid_bdev1", 00:15:20.100 "method": "bdev_raid_add_base_bdev", 00:15:20.100 "req_id": 1 00:15:20.100 } 00:15:20.100 Got JSON-RPC error response 00:15:20.100 response: 00:15:20.100 { 00:15:20.100 "code": -22, 00:15:20.100 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:20.100 } 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:20.100 15:24:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.042 "name": "raid_bdev1", 00:15:21.042 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:21.042 "strip_size_kb": 64, 00:15:21.042 "state": "online", 00:15:21.042 "raid_level": "raid5f", 00:15:21.042 "superblock": true, 00:15:21.042 "num_base_bdevs": 3, 00:15:21.042 "num_base_bdevs_discovered": 2, 00:15:21.042 "num_base_bdevs_operational": 2, 00:15:21.042 "base_bdevs_list": [ 00:15:21.042 { 00:15:21.042 "name": null, 00:15:21.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.042 "is_configured": false, 00:15:21.042 "data_offset": 0, 00:15:21.042 "data_size": 63488 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 
"name": "BaseBdev2", 00:15:21.042 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 }, 00:15:21.042 { 00:15:21.042 "name": "BaseBdev3", 00:15:21.042 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:21.042 "is_configured": true, 00:15:21.042 "data_offset": 2048, 00:15:21.042 "data_size": 63488 00:15:21.042 } 00:15:21.042 ] 00:15:21.042 }' 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.042 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.614 "name": "raid_bdev1", 00:15:21.614 "uuid": "435abe30-a9b5-4f1f-817f-6ca5ef600f08", 00:15:21.614 
"strip_size_kb": 64, 00:15:21.614 "state": "online", 00:15:21.614 "raid_level": "raid5f", 00:15:21.614 "superblock": true, 00:15:21.614 "num_base_bdevs": 3, 00:15:21.614 "num_base_bdevs_discovered": 2, 00:15:21.614 "num_base_bdevs_operational": 2, 00:15:21.614 "base_bdevs_list": [ 00:15:21.614 { 00:15:21.614 "name": null, 00:15:21.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.614 "is_configured": false, 00:15:21.614 "data_offset": 0, 00:15:21.614 "data_size": 63488 00:15:21.614 }, 00:15:21.614 { 00:15:21.614 "name": "BaseBdev2", 00:15:21.614 "uuid": "4575eb5e-e88d-51b7-b878-77269767ac0c", 00:15:21.614 "is_configured": true, 00:15:21.614 "data_offset": 2048, 00:15:21.614 "data_size": 63488 00:15:21.614 }, 00:15:21.614 { 00:15:21.614 "name": "BaseBdev3", 00:15:21.614 "uuid": "165cac35-d712-5ad4-8425-1911d97490f0", 00:15:21.614 "is_configured": true, 00:15:21.614 "data_offset": 2048, 00:15:21.614 "data_size": 63488 00:15:21.614 } 00:15:21.614 ] 00:15:21.614 }' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 93907 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 93907 ']' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 93907 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:21.614 15:24:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 93907 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:21.614 killing process with pid 93907 00:15:21.614 Received shutdown signal, test time was about 60.000000 seconds 00:15:21.614 00:15:21.614 Latency(us) 00:15:21.614 [2024-11-10T15:24:27.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:21.614 [2024-11-10T15:24:27.977Z] =================================================================================================================== 00:15:21.614 [2024-11-10T15:24:27.977Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 93907' 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 93907 00:15:21.614 [2024-11-10 15:24:27.913487] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.614 [2024-11-10 15:24:27.913605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.614 15:24:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 93907 00:15:21.614 [2024-11-10 15:24:27.913668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.614 [2024-11-10 15:24:27.913681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:21.874 [2024-11-10 15:24:27.989233] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.135 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:22.135 ************************************ 00:15:22.135 END TEST 
raid5f_rebuild_test_sb 00:15:22.135 ************************************ 00:15:22.135 00:15:22.135 real 0m21.872s 00:15:22.135 user 0m28.321s 00:15:22.135 sys 0m2.941s 00:15:22.135 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.135 15:24:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.135 15:24:28 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:22.135 15:24:28 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:22.135 15:24:28 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:22.135 15:24:28 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.135 15:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.135 ************************************ 00:15:22.135 START TEST raid5f_state_function_test 00:15:22.135 ************************************ 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 false 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=94642 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 94642' 00:15:22.135 Process raid pid: 94642 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 94642 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@833 -- # '[' -z 94642 ']' 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:22.135 15:24:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.396 [2024-11-10 15:24:28.498168] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:15:22.396 [2024-11-10 15:24:28.498471] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.396 [2024-11-10 15:24:28.639886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:22.396 [2024-11-10 15:24:28.676466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.396 [2024-11-10 15:24:28.717664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.656 [2024-11-10 15:24:28.793764] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:22.656 [2024-11-10 15:24:28.793869] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@866 -- # return 0 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.227 [2024-11-10 15:24:29.329829] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.227 [2024-11-10 15:24:29.329959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.227 [2024-11-10 15:24:29.329994] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.227 [2024-11-10 15:24:29.330029] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.227 [2024-11-10 15:24:29.330053] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.227 [2024-11-10 15:24:29.330089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.227 [2024-11-10 15:24:29.330123] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.227 [2024-11-10 15:24:29.330143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.227 "name": "Existed_Raid", 00:15:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.227 "strip_size_kb": 64, 00:15:23.227 "state": "configuring", 00:15:23.227 "raid_level": "raid5f", 00:15:23.227 "superblock": false, 00:15:23.227 "num_base_bdevs": 4, 00:15:23.227 "num_base_bdevs_discovered": 0, 00:15:23.227 "num_base_bdevs_operational": 4, 00:15:23.227 "base_bdevs_list": [ 00:15:23.227 { 00:15:23.227 "name": "BaseBdev1", 00:15:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.227 "is_configured": false, 00:15:23.227 "data_offset": 0, 00:15:23.227 "data_size": 0 00:15:23.227 }, 00:15:23.227 { 00:15:23.227 "name": "BaseBdev2", 00:15:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.227 "is_configured": false, 00:15:23.227 "data_offset": 0, 00:15:23.227 "data_size": 0 00:15:23.227 }, 00:15:23.227 { 00:15:23.227 "name": "BaseBdev3", 00:15:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.227 "is_configured": false, 00:15:23.227 "data_offset": 0, 00:15:23.227 "data_size": 0 00:15:23.227 }, 00:15:23.227 { 00:15:23.227 "name": "BaseBdev4", 00:15:23.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.227 "is_configured": false, 00:15:23.227 "data_offset": 0, 00:15:23.227 "data_size": 0 00:15:23.227 } 00:15:23.227 ] 00:15:23.227 }' 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:15:23.227 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.488 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.489 [2024-11-10 15:24:29.761795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:23.489 [2024-11-10 15:24:29.761871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.489 [2024-11-10 15:24:29.773839] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:23.489 [2024-11-10 15:24:29.773932] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:23.489 [2024-11-10 15:24:29.773962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:23.489 [2024-11-10 15:24:29.773983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:23.489 [2024-11-10 15:24:29.774003] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:23.489 [2024-11-10 15:24:29.774051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:23.489 [2024-11-10 15:24:29.774072] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:23.489 [2024-11-10 15:24:29.774099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.489 [2024-11-10 15:24:29.800870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:23.489 BaseBdev1 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 15:24:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.489 [ 00:15:23.489 { 00:15:23.489 "name": "BaseBdev1", 00:15:23.489 "aliases": [ 00:15:23.489 "c8eeed71-bb6a-4142-b3ca-fe4746532488" 00:15:23.489 ], 00:15:23.489 "product_name": "Malloc disk", 00:15:23.489 "block_size": 512, 00:15:23.489 "num_blocks": 65536, 00:15:23.489 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:23.489 "assigned_rate_limits": { 00:15:23.489 "rw_ios_per_sec": 0, 00:15:23.489 "rw_mbytes_per_sec": 0, 00:15:23.489 "r_mbytes_per_sec": 0, 00:15:23.489 "w_mbytes_per_sec": 0 00:15:23.489 }, 00:15:23.489 "claimed": true, 00:15:23.489 "claim_type": "exclusive_write", 00:15:23.489 "zoned": false, 00:15:23.489 "supported_io_types": { 00:15:23.489 "read": true, 00:15:23.489 "write": true, 00:15:23.489 "unmap": true, 00:15:23.489 "flush": true, 00:15:23.489 "reset": true, 00:15:23.489 "nvme_admin": false, 00:15:23.489 "nvme_io": false, 00:15:23.489 "nvme_io_md": false, 00:15:23.489 "write_zeroes": true, 00:15:23.489 "zcopy": true, 00:15:23.489 "get_zone_info": false, 00:15:23.489 "zone_management": false, 00:15:23.489 "zone_append": false, 00:15:23.489 "compare": false, 00:15:23.489 "compare_and_write": false, 00:15:23.489 "abort": true, 00:15:23.489 "seek_hole": false, 00:15:23.489 "seek_data": false, 00:15:23.489 "copy": true, 00:15:23.489 "nvme_iov_md": false 00:15:23.489 }, 00:15:23.489 "memory_domains": [ 00:15:23.489 { 00:15:23.489 "dma_device_id": "system", 00:15:23.489 "dma_device_type": 1 
00:15:23.489 }, 00:15:23.489 { 00:15:23.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:23.489 "dma_device_type": 2 00:15:23.489 } 00:15:23.489 ], 00:15:23.489 "driver_specific": {} 00:15:23.489 } 00:15:23.489 ] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:23.489 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.489 
15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.749 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.749 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.749 "name": "Existed_Raid", 00:15:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.749 "strip_size_kb": 64, 00:15:23.749 "state": "configuring", 00:15:23.749 "raid_level": "raid5f", 00:15:23.749 "superblock": false, 00:15:23.749 "num_base_bdevs": 4, 00:15:23.749 "num_base_bdevs_discovered": 1, 00:15:23.749 "num_base_bdevs_operational": 4, 00:15:23.749 "base_bdevs_list": [ 00:15:23.749 { 00:15:23.749 "name": "BaseBdev1", 00:15:23.749 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:23.749 "is_configured": true, 00:15:23.749 "data_offset": 0, 00:15:23.749 "data_size": 65536 00:15:23.749 }, 00:15:23.749 { 00:15:23.749 "name": "BaseBdev2", 00:15:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.749 "is_configured": false, 00:15:23.749 "data_offset": 0, 00:15:23.749 "data_size": 0 00:15:23.749 }, 00:15:23.749 { 00:15:23.749 "name": "BaseBdev3", 00:15:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.749 "is_configured": false, 00:15:23.749 "data_offset": 0, 00:15:23.749 "data_size": 0 00:15:23.749 }, 00:15:23.749 { 00:15:23.749 "name": "BaseBdev4", 00:15:23.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.749 "is_configured": false, 00:15:23.749 "data_offset": 0, 00:15:23.749 "data_size": 0 00:15:23.749 } 00:15:23.749 ] 00:15:23.749 }' 00:15:23.749 15:24:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.749 15:24:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:24.010 15:24:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 [2024-11-10 15:24:30.260994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:24.010 [2024-11-10 15:24:30.261118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 [2024-11-10 15:24:30.273046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.010 [2024-11-10 15:24:30.275159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.010 [2024-11-10 15:24:30.275198] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.010 [2024-11-10 15:24:30.275209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.010 [2024-11-10 15:24:30.275217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.010 [2024-11-10 15:24:30.275224] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.010 [2024-11-10 15:24:30.275231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.010 "name": "Existed_Raid", 00:15:24.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.010 "strip_size_kb": 64, 00:15:24.010 "state": "configuring", 00:15:24.010 "raid_level": "raid5f", 00:15:24.010 "superblock": false, 00:15:24.010 "num_base_bdevs": 4, 00:15:24.010 "num_base_bdevs_discovered": 1, 00:15:24.010 "num_base_bdevs_operational": 4, 00:15:24.010 "base_bdevs_list": [ 00:15:24.010 { 00:15:24.010 "name": "BaseBdev1", 00:15:24.010 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:24.010 "is_configured": true, 00:15:24.010 "data_offset": 0, 00:15:24.010 "data_size": 65536 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev2", 00:15:24.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.010 "is_configured": false, 00:15:24.010 "data_offset": 0, 00:15:24.010 "data_size": 0 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev3", 00:15:24.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.010 "is_configured": false, 00:15:24.010 "data_offset": 0, 00:15:24.010 "data_size": 0 00:15:24.010 }, 00:15:24.010 { 00:15:24.010 "name": "BaseBdev4", 00:15:24.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.010 "is_configured": false, 00:15:24.010 "data_offset": 0, 00:15:24.010 "data_size": 0 00:15:24.010 } 00:15:24.010 ] 00:15:24.010 }' 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.010 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.581 
[2024-11-10 15:24:30.741844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.581 BaseBdev2 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.581 [ 00:15:24.581 { 00:15:24.581 "name": "BaseBdev2", 00:15:24.581 "aliases": [ 00:15:24.581 "d49dc6a1-3400-4608-acfb-b70ec0ed6430" 00:15:24.581 ], 00:15:24.581 "product_name": "Malloc disk", 00:15:24.581 "block_size": 512, 00:15:24.581 "num_blocks": 
65536, 00:15:24.581 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:24.581 "assigned_rate_limits": { 00:15:24.581 "rw_ios_per_sec": 0, 00:15:24.581 "rw_mbytes_per_sec": 0, 00:15:24.581 "r_mbytes_per_sec": 0, 00:15:24.581 "w_mbytes_per_sec": 0 00:15:24.581 }, 00:15:24.581 "claimed": true, 00:15:24.581 "claim_type": "exclusive_write", 00:15:24.581 "zoned": false, 00:15:24.581 "supported_io_types": { 00:15:24.581 "read": true, 00:15:24.581 "write": true, 00:15:24.581 "unmap": true, 00:15:24.581 "flush": true, 00:15:24.581 "reset": true, 00:15:24.581 "nvme_admin": false, 00:15:24.581 "nvme_io": false, 00:15:24.581 "nvme_io_md": false, 00:15:24.581 "write_zeroes": true, 00:15:24.581 "zcopy": true, 00:15:24.581 "get_zone_info": false, 00:15:24.581 "zone_management": false, 00:15:24.581 "zone_append": false, 00:15:24.581 "compare": false, 00:15:24.581 "compare_and_write": false, 00:15:24.581 "abort": true, 00:15:24.581 "seek_hole": false, 00:15:24.581 "seek_data": false, 00:15:24.581 "copy": true, 00:15:24.581 "nvme_iov_md": false 00:15:24.581 }, 00:15:24.581 "memory_domains": [ 00:15:24.581 { 00:15:24.581 "dma_device_id": "system", 00:15:24.581 "dma_device_type": 1 00:15:24.581 }, 00:15:24.581 { 00:15:24.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:24.581 "dma_device_type": 2 00:15:24.581 } 00:15:24.581 ], 00:15:24.581 "driver_specific": {} 00:15:24.581 } 00:15:24.581 ] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:24.581 15:24:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.581 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.581 "name": "Existed_Raid", 00:15:24.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.581 "strip_size_kb": 64, 00:15:24.581 "state": "configuring", 00:15:24.581 "raid_level": "raid5f", 00:15:24.581 "superblock": false, 00:15:24.581 "num_base_bdevs": 4, 00:15:24.581 
"num_base_bdevs_discovered": 2, 00:15:24.581 "num_base_bdevs_operational": 4, 00:15:24.581 "base_bdevs_list": [ 00:15:24.581 { 00:15:24.581 "name": "BaseBdev1", 00:15:24.581 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:24.581 "is_configured": true, 00:15:24.581 "data_offset": 0, 00:15:24.581 "data_size": 65536 00:15:24.581 }, 00:15:24.581 { 00:15:24.581 "name": "BaseBdev2", 00:15:24.581 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:24.581 "is_configured": true, 00:15:24.581 "data_offset": 0, 00:15:24.581 "data_size": 65536 00:15:24.581 }, 00:15:24.581 { 00:15:24.581 "name": "BaseBdev3", 00:15:24.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.581 "is_configured": false, 00:15:24.581 "data_offset": 0, 00:15:24.581 "data_size": 0 00:15:24.581 }, 00:15:24.582 { 00:15:24.582 "name": "BaseBdev4", 00:15:24.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.582 "is_configured": false, 00:15:24.582 "data_offset": 0, 00:15:24.582 "data_size": 0 00:15:24.582 } 00:15:24.582 ] 00:15:24.582 }' 00:15:24.582 15:24:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.582 15:24:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 [2024-11-10 15:24:31.269625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:25.153 BaseBdev3 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:25.153 15:24:31 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 [ 00:15:25.153 { 00:15:25.153 "name": "BaseBdev3", 00:15:25.153 "aliases": [ 00:15:25.153 "66b4f336-1966-49eb-959e-ae70f4f48216" 00:15:25.153 ], 00:15:25.153 "product_name": "Malloc disk", 00:15:25.153 "block_size": 512, 00:15:25.153 "num_blocks": 65536, 00:15:25.153 "uuid": "66b4f336-1966-49eb-959e-ae70f4f48216", 00:15:25.153 "assigned_rate_limits": { 00:15:25.153 "rw_ios_per_sec": 0, 00:15:25.153 "rw_mbytes_per_sec": 0, 00:15:25.153 "r_mbytes_per_sec": 0, 00:15:25.153 "w_mbytes_per_sec": 0 00:15:25.153 }, 00:15:25.153 "claimed": true, 00:15:25.153 "claim_type": "exclusive_write", 00:15:25.153 "zoned": false, 00:15:25.153 
"supported_io_types": { 00:15:25.153 "read": true, 00:15:25.153 "write": true, 00:15:25.153 "unmap": true, 00:15:25.153 "flush": true, 00:15:25.153 "reset": true, 00:15:25.153 "nvme_admin": false, 00:15:25.153 "nvme_io": false, 00:15:25.153 "nvme_io_md": false, 00:15:25.153 "write_zeroes": true, 00:15:25.153 "zcopy": true, 00:15:25.153 "get_zone_info": false, 00:15:25.153 "zone_management": false, 00:15:25.153 "zone_append": false, 00:15:25.153 "compare": false, 00:15:25.153 "compare_and_write": false, 00:15:25.153 "abort": true, 00:15:25.153 "seek_hole": false, 00:15:25.153 "seek_data": false, 00:15:25.153 "copy": true, 00:15:25.153 "nvme_iov_md": false 00:15:25.153 }, 00:15:25.153 "memory_domains": [ 00:15:25.153 { 00:15:25.153 "dma_device_id": "system", 00:15:25.153 "dma_device_type": 1 00:15:25.153 }, 00:15:25.153 { 00:15:25.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.153 "dma_device_type": 2 00:15:25.153 } 00:15:25.153 ], 00:15:25.153 "driver_specific": {} 00:15:25.153 } 00:15:25.153 ] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.153 15:24:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.153 "name": "Existed_Raid", 00:15:25.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.153 "strip_size_kb": 64, 00:15:25.153 "state": "configuring", 00:15:25.153 "raid_level": "raid5f", 00:15:25.153 "superblock": false, 00:15:25.153 "num_base_bdevs": 4, 00:15:25.153 "num_base_bdevs_discovered": 3, 00:15:25.153 "num_base_bdevs_operational": 4, 00:15:25.153 "base_bdevs_list": [ 00:15:25.153 { 00:15:25.153 "name": "BaseBdev1", 00:15:25.153 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:25.153 "is_configured": true, 00:15:25.153 "data_offset": 0, 00:15:25.153 "data_size": 65536 00:15:25.153 }, 00:15:25.153 { 00:15:25.153 "name": 
"BaseBdev2", 00:15:25.153 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:25.153 "is_configured": true, 00:15:25.153 "data_offset": 0, 00:15:25.153 "data_size": 65536 00:15:25.153 }, 00:15:25.153 { 00:15:25.153 "name": "BaseBdev3", 00:15:25.153 "uuid": "66b4f336-1966-49eb-959e-ae70f4f48216", 00:15:25.153 "is_configured": true, 00:15:25.153 "data_offset": 0, 00:15:25.153 "data_size": 65536 00:15:25.153 }, 00:15:25.153 { 00:15:25.153 "name": "BaseBdev4", 00:15:25.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.153 "is_configured": false, 00:15:25.153 "data_offset": 0, 00:15:25.153 "data_size": 0 00:15:25.153 } 00:15:25.153 ] 00:15:25.153 }' 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.153 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.723 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:25.723 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 [2024-11-10 15:24:31.790350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:25.724 [2024-11-10 15:24:31.790486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:25.724 [2024-11-10 15:24:31.790519] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:25.724 [2024-11-10 15:24:31.790845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:25.724 [2024-11-10 15:24:31.791410] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:25.724 [2024-11-10 15:24:31.791461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007b00 00:15:25.724 [2024-11-10 15:24:31.791757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.724 BaseBdev4 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 [ 00:15:25.724 { 00:15:25.724 "name": "BaseBdev4", 00:15:25.724 "aliases": [ 00:15:25.724 "76fbf668-a4cc-48b4-81b7-a84c72b800a7" 00:15:25.724 ], 00:15:25.724 "product_name": "Malloc disk", 00:15:25.724 "block_size": 512, 
00:15:25.724 "num_blocks": 65536, 00:15:25.724 "uuid": "76fbf668-a4cc-48b4-81b7-a84c72b800a7", 00:15:25.724 "assigned_rate_limits": { 00:15:25.724 "rw_ios_per_sec": 0, 00:15:25.724 "rw_mbytes_per_sec": 0, 00:15:25.724 "r_mbytes_per_sec": 0, 00:15:25.724 "w_mbytes_per_sec": 0 00:15:25.724 }, 00:15:25.724 "claimed": true, 00:15:25.724 "claim_type": "exclusive_write", 00:15:25.724 "zoned": false, 00:15:25.724 "supported_io_types": { 00:15:25.724 "read": true, 00:15:25.724 "write": true, 00:15:25.724 "unmap": true, 00:15:25.724 "flush": true, 00:15:25.724 "reset": true, 00:15:25.724 "nvme_admin": false, 00:15:25.724 "nvme_io": false, 00:15:25.724 "nvme_io_md": false, 00:15:25.724 "write_zeroes": true, 00:15:25.724 "zcopy": true, 00:15:25.724 "get_zone_info": false, 00:15:25.724 "zone_management": false, 00:15:25.724 "zone_append": false, 00:15:25.724 "compare": false, 00:15:25.724 "compare_and_write": false, 00:15:25.724 "abort": true, 00:15:25.724 "seek_hole": false, 00:15:25.724 "seek_data": false, 00:15:25.724 "copy": true, 00:15:25.724 "nvme_iov_md": false 00:15:25.724 }, 00:15:25.724 "memory_domains": [ 00:15:25.724 { 00:15:25.724 "dma_device_id": "system", 00:15:25.724 "dma_device_type": 1 00:15:25.724 }, 00:15:25.724 { 00:15:25.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.724 "dma_device_type": 2 00:15:25.724 } 00:15:25.724 ], 00:15:25.724 "driver_specific": {} 00:15:25.724 } 00:15:25.724 ] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 
00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.724 "name": "Existed_Raid", 00:15:25.724 "uuid": "ffbc9948-7419-4274-a1c6-304f18878af0", 00:15:25.724 "strip_size_kb": 64, 00:15:25.724 "state": "online", 00:15:25.724 "raid_level": "raid5f", 00:15:25.724 "superblock": false, 00:15:25.724 "num_base_bdevs": 4, 00:15:25.724 
"num_base_bdevs_discovered": 4, 00:15:25.724 "num_base_bdevs_operational": 4, 00:15:25.724 "base_bdevs_list": [ 00:15:25.724 { 00:15:25.724 "name": "BaseBdev1", 00:15:25.724 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:25.724 "is_configured": true, 00:15:25.724 "data_offset": 0, 00:15:25.724 "data_size": 65536 00:15:25.724 }, 00:15:25.724 { 00:15:25.724 "name": "BaseBdev2", 00:15:25.724 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:25.724 "is_configured": true, 00:15:25.724 "data_offset": 0, 00:15:25.724 "data_size": 65536 00:15:25.724 }, 00:15:25.724 { 00:15:25.724 "name": "BaseBdev3", 00:15:25.724 "uuid": "66b4f336-1966-49eb-959e-ae70f4f48216", 00:15:25.724 "is_configured": true, 00:15:25.724 "data_offset": 0, 00:15:25.724 "data_size": 65536 00:15:25.724 }, 00:15:25.724 { 00:15:25.724 "name": "BaseBdev4", 00:15:25.724 "uuid": "76fbf668-a4cc-48b4-81b7-a84c72b800a7", 00:15:25.724 "is_configured": true, 00:15:25.724 "data_offset": 0, 00:15:25.724 "data_size": 65536 00:15:25.724 } 00:15:25.724 ] 00:15:25.724 }' 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.724 15:24:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:25.985 15:24:32 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:25.985 [2024-11-10 15:24:32.314940] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:25.985 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.245 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:26.245 "name": "Existed_Raid", 00:15:26.245 "aliases": [ 00:15:26.245 "ffbc9948-7419-4274-a1c6-304f18878af0" 00:15:26.245 ], 00:15:26.245 "product_name": "Raid Volume", 00:15:26.245 "block_size": 512, 00:15:26.245 "num_blocks": 196608, 00:15:26.246 "uuid": "ffbc9948-7419-4274-a1c6-304f18878af0", 00:15:26.246 "assigned_rate_limits": { 00:15:26.246 "rw_ios_per_sec": 0, 00:15:26.246 "rw_mbytes_per_sec": 0, 00:15:26.246 "r_mbytes_per_sec": 0, 00:15:26.246 "w_mbytes_per_sec": 0 00:15:26.246 }, 00:15:26.246 "claimed": false, 00:15:26.246 "zoned": false, 00:15:26.246 "supported_io_types": { 00:15:26.246 "read": true, 00:15:26.246 "write": true, 00:15:26.246 "unmap": false, 00:15:26.246 "flush": false, 00:15:26.246 "reset": true, 00:15:26.246 "nvme_admin": false, 00:15:26.246 "nvme_io": false, 00:15:26.246 "nvme_io_md": false, 00:15:26.246 "write_zeroes": true, 00:15:26.246 "zcopy": false, 00:15:26.246 "get_zone_info": false, 00:15:26.246 "zone_management": false, 00:15:26.246 "zone_append": false, 00:15:26.246 "compare": false, 00:15:26.246 "compare_and_write": false, 00:15:26.246 "abort": false, 00:15:26.246 "seek_hole": false, 00:15:26.246 "seek_data": false, 00:15:26.246 "copy": false, 00:15:26.246 "nvme_iov_md": false 
00:15:26.246 }, 00:15:26.246 "driver_specific": { 00:15:26.246 "raid": { 00:15:26.246 "uuid": "ffbc9948-7419-4274-a1c6-304f18878af0", 00:15:26.246 "strip_size_kb": 64, 00:15:26.246 "state": "online", 00:15:26.246 "raid_level": "raid5f", 00:15:26.246 "superblock": false, 00:15:26.246 "num_base_bdevs": 4, 00:15:26.246 "num_base_bdevs_discovered": 4, 00:15:26.246 "num_base_bdevs_operational": 4, 00:15:26.246 "base_bdevs_list": [ 00:15:26.246 { 00:15:26.246 "name": "BaseBdev1", 00:15:26.246 "uuid": "c8eeed71-bb6a-4142-b3ca-fe4746532488", 00:15:26.246 "is_configured": true, 00:15:26.246 "data_offset": 0, 00:15:26.246 "data_size": 65536 00:15:26.246 }, 00:15:26.246 { 00:15:26.246 "name": "BaseBdev2", 00:15:26.246 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:26.246 "is_configured": true, 00:15:26.246 "data_offset": 0, 00:15:26.246 "data_size": 65536 00:15:26.246 }, 00:15:26.246 { 00:15:26.246 "name": "BaseBdev3", 00:15:26.246 "uuid": "66b4f336-1966-49eb-959e-ae70f4f48216", 00:15:26.246 "is_configured": true, 00:15:26.246 "data_offset": 0, 00:15:26.246 "data_size": 65536 00:15:26.246 }, 00:15:26.246 { 00:15:26.246 "name": "BaseBdev4", 00:15:26.246 "uuid": "76fbf668-a4cc-48b4-81b7-a84c72b800a7", 00:15:26.246 "is_configured": true, 00:15:26.246 "data_offset": 0, 00:15:26.246 "data_size": 65536 00:15:26.246 } 00:15:26.246 ] 00:15:26.246 } 00:15:26.246 } 00:15:26.246 }' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:26.246 BaseBdev2 00:15:26.246 BaseBdev3 00:15:26.246 BaseBdev4' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='512 ' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.246 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:26.507 
15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.507 [2024-11-10 15:24:32.634918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.507 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.508 "name": "Existed_Raid", 00:15:26.508 "uuid": "ffbc9948-7419-4274-a1c6-304f18878af0", 00:15:26.508 "strip_size_kb": 64, 00:15:26.508 "state": "online", 00:15:26.508 "raid_level": "raid5f", 00:15:26.508 "superblock": false, 00:15:26.508 "num_base_bdevs": 4, 00:15:26.508 "num_base_bdevs_discovered": 3, 00:15:26.508 "num_base_bdevs_operational": 3, 00:15:26.508 "base_bdevs_list": [ 00:15:26.508 { 00:15:26.508 "name": null, 00:15:26.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.508 "is_configured": false, 00:15:26.508 "data_offset": 0, 00:15:26.508 "data_size": 65536 00:15:26.508 }, 00:15:26.508 { 00:15:26.508 "name": "BaseBdev2", 00:15:26.508 "uuid": "d49dc6a1-3400-4608-acfb-b70ec0ed6430", 00:15:26.508 "is_configured": true, 00:15:26.508 "data_offset": 0, 00:15:26.508 "data_size": 65536 00:15:26.508 }, 00:15:26.508 { 00:15:26.508 "name": "BaseBdev3", 00:15:26.508 "uuid": "66b4f336-1966-49eb-959e-ae70f4f48216", 00:15:26.508 "is_configured": true, 00:15:26.508 "data_offset": 0, 00:15:26.508 "data_size": 65536 00:15:26.508 }, 00:15:26.508 { 00:15:26.508 "name": "BaseBdev4", 00:15:26.508 "uuid": "76fbf668-a4cc-48b4-81b7-a84c72b800a7", 00:15:26.508 
"is_configured": true, 00:15:26.508 "data_offset": 0, 00:15:26.508 "data_size": 65536 00:15:26.508 } 00:15:26.508 ] 00:15:26.508 }' 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.508 15:24:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.768 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.028 [2024-11-10 15:24:33.139803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:27.028 [2024-11-10 15:24:33.139979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:27.028 [2024-11-10 15:24:33.160512] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.028 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.029 [2024-11-10 15:24:33.216545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.029 15:24:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.029 [2024-11-10 15:24:33.293354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:27.029 [2024-11-10 15:24:33.293407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:27.029 15:24:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.029 BaseBdev2 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 
00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.029 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 [ 00:15:27.290 { 00:15:27.290 "name": "BaseBdev2", 00:15:27.290 "aliases": [ 00:15:27.290 "de3639a1-7973-46cf-b1ed-363dad1c3ad2" 00:15:27.290 ], 00:15:27.290 "product_name": "Malloc disk", 00:15:27.290 "block_size": 512, 00:15:27.290 "num_blocks": 65536, 00:15:27.290 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:27.290 "assigned_rate_limits": { 00:15:27.290 "rw_ios_per_sec": 0, 00:15:27.290 "rw_mbytes_per_sec": 0, 00:15:27.290 "r_mbytes_per_sec": 0, 00:15:27.290 "w_mbytes_per_sec": 0 00:15:27.290 }, 00:15:27.290 "claimed": false, 00:15:27.290 "zoned": false, 00:15:27.290 "supported_io_types": { 00:15:27.290 "read": true, 00:15:27.290 "write": true, 00:15:27.290 "unmap": true, 00:15:27.290 "flush": true, 00:15:27.290 "reset": true, 00:15:27.290 "nvme_admin": false, 00:15:27.290 "nvme_io": false, 00:15:27.290 "nvme_io_md": false, 00:15:27.290 "write_zeroes": true, 00:15:27.290 "zcopy": true, 00:15:27.290 "get_zone_info": false, 00:15:27.290 "zone_management": false, 00:15:27.290 "zone_append": false, 00:15:27.290 "compare": false, 00:15:27.290 "compare_and_write": false, 00:15:27.290 "abort": true, 00:15:27.290 "seek_hole": false, 00:15:27.290 
"seek_data": false, 00:15:27.290 "copy": true, 00:15:27.290 "nvme_iov_md": false 00:15:27.290 }, 00:15:27.290 "memory_domains": [ 00:15:27.290 { 00:15:27.290 "dma_device_id": "system", 00:15:27.290 "dma_device_type": 1 00:15:27.290 }, 00:15:27.290 { 00:15:27.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.290 "dma_device_type": 2 00:15:27.290 } 00:15:27.290 ], 00:15:27.290 "driver_specific": {} 00:15:27.290 } 00:15:27.290 ] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 BaseBdev3 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 [ 00:15:27.290 { 00:15:27.290 "name": "BaseBdev3", 00:15:27.290 "aliases": [ 00:15:27.290 "e396e539-7a63-4115-9086-70efbd517747" 00:15:27.290 ], 00:15:27.290 "product_name": "Malloc disk", 00:15:27.290 "block_size": 512, 00:15:27.290 "num_blocks": 65536, 00:15:27.290 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:27.290 "assigned_rate_limits": { 00:15:27.290 "rw_ios_per_sec": 0, 00:15:27.290 "rw_mbytes_per_sec": 0, 00:15:27.290 "r_mbytes_per_sec": 0, 00:15:27.290 "w_mbytes_per_sec": 0 00:15:27.290 }, 00:15:27.290 "claimed": false, 00:15:27.290 "zoned": false, 00:15:27.290 "supported_io_types": { 00:15:27.290 "read": true, 00:15:27.290 "write": true, 00:15:27.290 "unmap": true, 00:15:27.290 "flush": true, 00:15:27.290 "reset": true, 00:15:27.290 "nvme_admin": false, 00:15:27.290 "nvme_io": false, 00:15:27.290 "nvme_io_md": false, 00:15:27.290 "write_zeroes": true, 00:15:27.290 "zcopy": true, 00:15:27.290 "get_zone_info": false, 00:15:27.290 "zone_management": false, 00:15:27.290 "zone_append": false, 00:15:27.290 "compare": false, 00:15:27.290 "compare_and_write": false, 00:15:27.290 "abort": true, 
00:15:27.290 "seek_hole": false, 00:15:27.290 "seek_data": false, 00:15:27.290 "copy": true, 00:15:27.290 "nvme_iov_md": false 00:15:27.290 }, 00:15:27.290 "memory_domains": [ 00:15:27.290 { 00:15:27.290 "dma_device_id": "system", 00:15:27.290 "dma_device_type": 1 00:15:27.290 }, 00:15:27.290 { 00:15:27.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.290 "dma_device_type": 2 00:15:27.290 } 00:15:27.290 ], 00:15:27.290 "driver_specific": {} 00:15:27.290 } 00:15:27.290 ] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 BaseBdev4 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:27.290 15:24:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.290 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.291 [ 00:15:27.291 { 00:15:27.291 "name": "BaseBdev4", 00:15:27.291 "aliases": [ 00:15:27.291 "9911e1f5-943a-481b-99fc-cdb1f176259b" 00:15:27.291 ], 00:15:27.291 "product_name": "Malloc disk", 00:15:27.291 "block_size": 512, 00:15:27.291 "num_blocks": 65536, 00:15:27.291 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:27.291 "assigned_rate_limits": { 00:15:27.291 "rw_ios_per_sec": 0, 00:15:27.291 "rw_mbytes_per_sec": 0, 00:15:27.291 "r_mbytes_per_sec": 0, 00:15:27.291 "w_mbytes_per_sec": 0 00:15:27.291 }, 00:15:27.291 "claimed": false, 00:15:27.291 "zoned": false, 00:15:27.291 "supported_io_types": { 00:15:27.291 "read": true, 00:15:27.291 "write": true, 00:15:27.291 "unmap": true, 00:15:27.291 "flush": true, 00:15:27.291 "reset": true, 00:15:27.291 "nvme_admin": false, 00:15:27.291 "nvme_io": false, 00:15:27.291 "nvme_io_md": false, 00:15:27.291 "write_zeroes": true, 00:15:27.291 "zcopy": true, 00:15:27.291 "get_zone_info": false, 00:15:27.291 "zone_management": false, 00:15:27.291 "zone_append": false, 00:15:27.291 "compare": false, 00:15:27.291 
"compare_and_write": false, 00:15:27.291 "abort": true, 00:15:27.291 "seek_hole": false, 00:15:27.291 "seek_data": false, 00:15:27.291 "copy": true, 00:15:27.291 "nvme_iov_md": false 00:15:27.291 }, 00:15:27.291 "memory_domains": [ 00:15:27.291 { 00:15:27.291 "dma_device_id": "system", 00:15:27.291 "dma_device_type": 1 00:15:27.291 }, 00:15:27.291 { 00:15:27.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.291 "dma_device_type": 2 00:15:27.291 } 00:15:27.291 ], 00:15:27.291 "driver_specific": {} 00:15:27.291 } 00:15:27.291 ] 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.291 [2024-11-10 15:24:33.543265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:27.291 [2024-11-10 15:24:33.543409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:27.291 [2024-11-10 15:24:33.543452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.291 [2024-11-10 15:24:33.545645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.291 [2024-11-10 15:24:33.545735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev4 is claimed 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:15:27.291 "name": "Existed_Raid", 00:15:27.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.291 "strip_size_kb": 64, 00:15:27.291 "state": "configuring", 00:15:27.291 "raid_level": "raid5f", 00:15:27.291 "superblock": false, 00:15:27.291 "num_base_bdevs": 4, 00:15:27.291 "num_base_bdevs_discovered": 3, 00:15:27.291 "num_base_bdevs_operational": 4, 00:15:27.291 "base_bdevs_list": [ 00:15:27.291 { 00:15:27.291 "name": "BaseBdev1", 00:15:27.291 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.291 "is_configured": false, 00:15:27.291 "data_offset": 0, 00:15:27.291 "data_size": 0 00:15:27.291 }, 00:15:27.291 { 00:15:27.291 "name": "BaseBdev2", 00:15:27.291 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:27.291 "is_configured": true, 00:15:27.291 "data_offset": 0, 00:15:27.291 "data_size": 65536 00:15:27.291 }, 00:15:27.291 { 00:15:27.291 "name": "BaseBdev3", 00:15:27.291 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:27.291 "is_configured": true, 00:15:27.291 "data_offset": 0, 00:15:27.291 "data_size": 65536 00:15:27.291 }, 00:15:27.291 { 00:15:27.291 "name": "BaseBdev4", 00:15:27.291 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:27.291 "is_configured": true, 00:15:27.291 "data_offset": 0, 00:15:27.291 "data_size": 65536 00:15:27.291 } 00:15:27.291 ] 00:15:27.291 }' 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.291 15:24:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.862 [2024-11-10 15:24:34.047381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.862 "name": 
"Existed_Raid", 00:15:27.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.862 "strip_size_kb": 64, 00:15:27.862 "state": "configuring", 00:15:27.862 "raid_level": "raid5f", 00:15:27.862 "superblock": false, 00:15:27.862 "num_base_bdevs": 4, 00:15:27.862 "num_base_bdevs_discovered": 2, 00:15:27.862 "num_base_bdevs_operational": 4, 00:15:27.862 "base_bdevs_list": [ 00:15:27.862 { 00:15:27.862 "name": "BaseBdev1", 00:15:27.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.862 "is_configured": false, 00:15:27.862 "data_offset": 0, 00:15:27.862 "data_size": 0 00:15:27.862 }, 00:15:27.862 { 00:15:27.862 "name": null, 00:15:27.862 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:27.862 "is_configured": false, 00:15:27.862 "data_offset": 0, 00:15:27.862 "data_size": 65536 00:15:27.862 }, 00:15:27.862 { 00:15:27.862 "name": "BaseBdev3", 00:15:27.862 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:27.862 "is_configured": true, 00:15:27.862 "data_offset": 0, 00:15:27.862 "data_size": 65536 00:15:27.862 }, 00:15:27.862 { 00:15:27.862 "name": "BaseBdev4", 00:15:27.862 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:27.862 "is_configured": true, 00:15:27.862 "data_offset": 0, 00:15:27.862 "data_size": 65536 00:15:27.862 } 00:15:27.862 ] 00:15:27.862 }' 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.862 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 15:24:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 [2024-11-10 15:24:34.556232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:28.433 BaseBdev1 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.433 15:24:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 [ 00:15:28.433 { 00:15:28.433 "name": "BaseBdev1", 00:15:28.433 "aliases": [ 00:15:28.433 "56a6d070-8b4d-490a-bb03-f33d45d632cd" 00:15:28.433 ], 00:15:28.433 "product_name": "Malloc disk", 00:15:28.433 "block_size": 512, 00:15:28.433 "num_blocks": 65536, 00:15:28.433 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:28.433 "assigned_rate_limits": { 00:15:28.433 "rw_ios_per_sec": 0, 00:15:28.433 "rw_mbytes_per_sec": 0, 00:15:28.433 "r_mbytes_per_sec": 0, 00:15:28.433 "w_mbytes_per_sec": 0 00:15:28.433 }, 00:15:28.433 "claimed": true, 00:15:28.433 "claim_type": "exclusive_write", 00:15:28.433 "zoned": false, 00:15:28.433 "supported_io_types": { 00:15:28.433 "read": true, 00:15:28.433 "write": true, 00:15:28.433 "unmap": true, 00:15:28.433 "flush": true, 00:15:28.433 "reset": true, 00:15:28.433 "nvme_admin": false, 00:15:28.433 "nvme_io": false, 00:15:28.433 "nvme_io_md": false, 00:15:28.433 "write_zeroes": true, 00:15:28.433 "zcopy": true, 00:15:28.433 "get_zone_info": false, 00:15:28.433 "zone_management": false, 00:15:28.433 "zone_append": false, 00:15:28.433 "compare": false, 00:15:28.433 "compare_and_write": false, 00:15:28.433 "abort": true, 00:15:28.433 "seek_hole": false, 00:15:28.433 "seek_data": false, 00:15:28.433 "copy": true, 00:15:28.433 "nvme_iov_md": false 00:15:28.433 }, 00:15:28.433 "memory_domains": [ 00:15:28.433 { 00:15:28.433 "dma_device_id": "system", 00:15:28.433 "dma_device_type": 1 00:15:28.433 }, 00:15:28.433 { 00:15:28.433 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.433 "dma_device_type": 2 00:15:28.433 } 00:15:28.433 ], 00:15:28.433 "driver_specific": {} 00:15:28.433 } 00:15:28.433 ] 
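The `waitforbdev` helper above polls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the descriptor just printed appears. A quick sanity sketch of what that descriptor encodes, with the values copied from the log (a 32 MiB malloc disk with 512-byte blocks, claimed `exclusive_write` by the raid module, supporting read/write/unmap/flush but no NVMe passthrough):

```python
import json

# Fields abbreviated from the BaseBdev1 descriptor in the log above.
bdev = json.loads("""
{"name": "BaseBdev1", "product_name": "Malloc disk",
 "block_size": 512, "num_blocks": 65536,
 "claimed": true, "claim_type": "exclusive_write",
 "supported_io_types": {"read": true, "write": true, "unmap": true,
                        "flush": true, "nvme_admin": false,
                        "zone_append": false}}
""")

# 65536 blocks * 512 B = 32 MiB, matching "bdev_malloc_create 32 512".
assert bdev["num_blocks"] * bdev["block_size"] == 32 * 1024 * 1024
# Malloc disks support basic block I/O but not NVMe admin or zoned commands.
assert bdev["supported_io_types"]["read"]
assert not bdev["supported_io_types"]["nvme_admin"]
```

Note that unlike BaseBdev2-4 earlier in the log, this descriptor shows `"claimed": true`: BaseBdev1 was created after `Existed_Raid`, so `raid_bdev_configure_base_bdev` claimed it immediately.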
00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.433 15:24:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.433 "name": "Existed_Raid", 00:15:28.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.433 "strip_size_kb": 64, 00:15:28.433 "state": "configuring", 00:15:28.433 "raid_level": "raid5f", 00:15:28.433 "superblock": false, 00:15:28.433 "num_base_bdevs": 4, 00:15:28.433 "num_base_bdevs_discovered": 3, 00:15:28.433 "num_base_bdevs_operational": 4, 00:15:28.433 "base_bdevs_list": [ 00:15:28.433 { 00:15:28.433 "name": "BaseBdev1", 00:15:28.433 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:28.433 "is_configured": true, 00:15:28.433 "data_offset": 0, 00:15:28.433 "data_size": 65536 00:15:28.433 }, 00:15:28.433 { 00:15:28.433 "name": null, 00:15:28.433 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:28.433 "is_configured": false, 00:15:28.433 "data_offset": 0, 00:15:28.433 "data_size": 65536 00:15:28.433 }, 00:15:28.433 { 00:15:28.433 "name": "BaseBdev3", 00:15:28.433 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:28.433 "is_configured": true, 00:15:28.433 "data_offset": 0, 00:15:28.433 "data_size": 65536 00:15:28.433 }, 00:15:28.433 { 00:15:28.433 "name": "BaseBdev4", 00:15:28.433 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:28.433 "is_configured": true, 00:15:28.433 "data_offset": 0, 00:15:28.433 "data_size": 65536 00:15:28.433 } 00:15:28.433 ] 00:15:28.433 }' 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.433 15:24:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.003 
15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.003 [2024-11-10 15:24:35.124416] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.003 15:24:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.003 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.003 "name": "Existed_Raid", 00:15:29.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.003 "strip_size_kb": 64, 00:15:29.003 "state": "configuring", 00:15:29.003 "raid_level": "raid5f", 00:15:29.003 "superblock": false, 00:15:29.003 "num_base_bdevs": 4, 00:15:29.003 "num_base_bdevs_discovered": 2, 00:15:29.003 "num_base_bdevs_operational": 4, 00:15:29.003 "base_bdevs_list": [ 00:15:29.003 { 00:15:29.003 "name": "BaseBdev1", 00:15:29.004 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:29.004 "is_configured": true, 00:15:29.004 "data_offset": 0, 00:15:29.004 "data_size": 65536 00:15:29.004 }, 00:15:29.004 { 00:15:29.004 "name": null, 00:15:29.004 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:29.004 "is_configured": false, 00:15:29.004 "data_offset": 0, 00:15:29.004 "data_size": 65536 00:15:29.004 }, 00:15:29.004 { 00:15:29.004 "name": null, 00:15:29.004 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:29.004 "is_configured": false, 00:15:29.004 "data_offset": 0, 00:15:29.004 "data_size": 65536 00:15:29.004 }, 00:15:29.004 { 00:15:29.004 "name": "BaseBdev4", 00:15:29.004 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:29.004 "is_configured": true, 00:15:29.004 
"data_offset": 0, 00:15:29.004 "data_size": 65536 00:15:29.004 } 00:15:29.004 ] 00:15:29.004 }' 00:15:29.004 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.004 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 [2024-11-10 15:24:35.588583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.264 
15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.264 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.523 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.523 "name": "Existed_Raid", 00:15:29.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.523 "strip_size_kb": 64, 00:15:29.523 "state": "configuring", 00:15:29.523 "raid_level": "raid5f", 00:15:29.523 "superblock": false, 00:15:29.523 "num_base_bdevs": 4, 00:15:29.523 "num_base_bdevs_discovered": 3, 00:15:29.523 "num_base_bdevs_operational": 4, 00:15:29.523 "base_bdevs_list": [ 00:15:29.523 { 00:15:29.523 "name": "BaseBdev1", 00:15:29.523 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:29.523 "is_configured": 
true, 00:15:29.523 "data_offset": 0, 00:15:29.523 "data_size": 65536 00:15:29.523 }, 00:15:29.523 { 00:15:29.523 "name": null, 00:15:29.523 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:29.523 "is_configured": false, 00:15:29.523 "data_offset": 0, 00:15:29.523 "data_size": 65536 00:15:29.523 }, 00:15:29.523 { 00:15:29.523 "name": "BaseBdev3", 00:15:29.523 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:29.523 "is_configured": true, 00:15:29.523 "data_offset": 0, 00:15:29.523 "data_size": 65536 00:15:29.523 }, 00:15:29.523 { 00:15:29.523 "name": "BaseBdev4", 00:15:29.523 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:29.523 "is_configured": true, 00:15:29.523 "data_offset": 0, 00:15:29.523 "data_size": 65536 00:15:29.523 } 00:15:29.523 ] 00:15:29.523 }' 00:15:29.523 15:24:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.523 15:24:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.785 15:24:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.785 [2024-11-10 15:24:36.128751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.045 "name": "Existed_Raid", 00:15:30.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.045 "strip_size_kb": 64, 00:15:30.045 "state": "configuring", 00:15:30.045 "raid_level": "raid5f", 00:15:30.045 "superblock": false, 00:15:30.045 "num_base_bdevs": 4, 00:15:30.045 "num_base_bdevs_discovered": 2, 00:15:30.045 "num_base_bdevs_operational": 4, 00:15:30.045 "base_bdevs_list": [ 00:15:30.045 { 00:15:30.045 "name": null, 00:15:30.045 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:30.045 "is_configured": false, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": null, 00:15:30.045 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:30.045 "is_configured": false, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev3", 00:15:30.045 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 }, 00:15:30.045 { 00:15:30.045 "name": "BaseBdev4", 00:15:30.045 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:30.045 "is_configured": true, 00:15:30.045 "data_offset": 0, 00:15:30.045 "data_size": 65536 00:15:30.045 } 00:15:30.045 ] 00:15:30.045 }' 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.045 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.320 [2024-11-10 15:24:36.616812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.320 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.630 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.630 "name": "Existed_Raid", 00:15:30.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.630 "strip_size_kb": 64, 00:15:30.630 "state": "configuring", 00:15:30.630 "raid_level": "raid5f", 00:15:30.630 "superblock": false, 00:15:30.630 "num_base_bdevs": 4, 00:15:30.630 "num_base_bdevs_discovered": 3, 00:15:30.630 "num_base_bdevs_operational": 4, 00:15:30.630 "base_bdevs_list": [ 00:15:30.630 { 00:15:30.630 "name": null, 00:15:30.630 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:30.630 "is_configured": false, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev2", 00:15:30.630 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev3", 00:15:30.630 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 }, 00:15:30.630 { 00:15:30.630 "name": "BaseBdev4", 00:15:30.630 "uuid": 
"9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:30.630 "is_configured": true, 00:15:30.630 "data_offset": 0, 00:15:30.630 "data_size": 65536 00:15:30.630 } 00:15:30.630 ] 00:15:30.630 }' 00:15:30.630 15:24:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.630 15:24:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 56a6d070-8b4d-490a-bb03-f33d45d632cd 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 [2024-11-10 15:24:37.209539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:30.918 [2024-11-10 15:24:37.209587] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:30.918 [2024-11-10 15:24:37.209599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:30.918 [2024-11-10 15:24:37.209847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000067d0 00:15:30.918 [2024-11-10 15:24:37.210373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:30.918 [2024-11-10 15:24:37.210392] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:30.918 [2024-11-10 15:24:37.210614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.918 NewBaseBdev 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local i 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 [ 00:15:30.918 { 00:15:30.918 "name": "NewBaseBdev", 00:15:30.918 "aliases": [ 00:15:30.918 "56a6d070-8b4d-490a-bb03-f33d45d632cd" 00:15:30.918 ], 00:15:30.918 "product_name": "Malloc disk", 00:15:30.918 "block_size": 512, 00:15:30.918 "num_blocks": 65536, 00:15:30.918 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:30.918 "assigned_rate_limits": { 00:15:30.918 "rw_ios_per_sec": 0, 00:15:30.918 "rw_mbytes_per_sec": 0, 00:15:30.918 "r_mbytes_per_sec": 0, 00:15:30.918 "w_mbytes_per_sec": 0 00:15:30.918 }, 00:15:30.918 "claimed": true, 00:15:30.918 "claim_type": "exclusive_write", 00:15:30.918 "zoned": false, 00:15:30.918 "supported_io_types": { 00:15:30.918 "read": true, 00:15:30.918 "write": true, 00:15:30.918 "unmap": true, 00:15:30.918 "flush": true, 00:15:30.918 "reset": true, 00:15:30.918 "nvme_admin": false, 00:15:30.918 "nvme_io": false, 00:15:30.918 "nvme_io_md": false, 00:15:30.918 "write_zeroes": true, 00:15:30.918 "zcopy": true, 00:15:30.918 "get_zone_info": false, 00:15:30.918 "zone_management": false, 00:15:30.918 "zone_append": false, 00:15:30.918 "compare": false, 00:15:30.918 "compare_and_write": false, 00:15:30.918 "abort": true, 00:15:30.918 "seek_hole": false, 00:15:30.918 "seek_data": false, 00:15:30.918 "copy": true, 00:15:30.918 "nvme_iov_md": false 00:15:30.918 }, 00:15:30.918 "memory_domains": [ 00:15:30.918 { 
00:15:30.918 "dma_device_id": "system", 00:15:30.918 "dma_device_type": 1 00:15:30.918 }, 00:15:30.918 { 00:15:30.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.918 "dma_device_type": 2 00:15:30.918 } 00:15:30.918 ], 00:15:30.918 "driver_specific": {} 00:15:30.918 } 00:15:30.918 ] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@909 -- # return 0 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.918 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.178 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.178 "name": "Existed_Raid", 00:15:31.178 "uuid": "f0e18ed8-3769-4427-88ad-ef2506b5b5dd", 00:15:31.178 "strip_size_kb": 64, 00:15:31.178 "state": "online", 00:15:31.178 "raid_level": "raid5f", 00:15:31.178 "superblock": false, 00:15:31.178 "num_base_bdevs": 4, 00:15:31.178 "num_base_bdevs_discovered": 4, 00:15:31.178 "num_base_bdevs_operational": 4, 00:15:31.178 "base_bdevs_list": [ 00:15:31.178 { 00:15:31.178 "name": "NewBaseBdev", 00:15:31.178 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:31.178 "is_configured": true, 00:15:31.178 "data_offset": 0, 00:15:31.178 "data_size": 65536 00:15:31.178 }, 00:15:31.178 { 00:15:31.178 "name": "BaseBdev2", 00:15:31.178 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:31.178 "is_configured": true, 00:15:31.178 "data_offset": 0, 00:15:31.178 "data_size": 65536 00:15:31.178 }, 00:15:31.178 { 00:15:31.178 "name": "BaseBdev3", 00:15:31.178 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:31.178 "is_configured": true, 00:15:31.178 "data_offset": 0, 00:15:31.178 "data_size": 65536 00:15:31.178 }, 00:15:31.178 { 00:15:31.178 "name": "BaseBdev4", 00:15:31.178 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:31.178 "is_configured": true, 00:15:31.178 "data_offset": 0, 00:15:31.178 "data_size": 65536 00:15:31.178 } 00:15:31.178 ] 00:15:31.178 }' 00:15:31.178 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.178 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.439 [2024-11-10 15:24:37.657835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:31.439 "name": "Existed_Raid", 00:15:31.439 "aliases": [ 00:15:31.439 "f0e18ed8-3769-4427-88ad-ef2506b5b5dd" 00:15:31.439 ], 00:15:31.439 "product_name": "Raid Volume", 00:15:31.439 "block_size": 512, 00:15:31.439 "num_blocks": 196608, 00:15:31.439 "uuid": "f0e18ed8-3769-4427-88ad-ef2506b5b5dd", 00:15:31.439 "assigned_rate_limits": { 00:15:31.439 "rw_ios_per_sec": 0, 00:15:31.439 "rw_mbytes_per_sec": 0, 00:15:31.439 "r_mbytes_per_sec": 0, 00:15:31.439 "w_mbytes_per_sec": 0 00:15:31.439 }, 00:15:31.439 "claimed": false, 00:15:31.439 "zoned": false, 00:15:31.439 "supported_io_types": { 00:15:31.439 
"read": true, 00:15:31.439 "write": true, 00:15:31.439 "unmap": false, 00:15:31.439 "flush": false, 00:15:31.439 "reset": true, 00:15:31.439 "nvme_admin": false, 00:15:31.439 "nvme_io": false, 00:15:31.439 "nvme_io_md": false, 00:15:31.439 "write_zeroes": true, 00:15:31.439 "zcopy": false, 00:15:31.439 "get_zone_info": false, 00:15:31.439 "zone_management": false, 00:15:31.439 "zone_append": false, 00:15:31.439 "compare": false, 00:15:31.439 "compare_and_write": false, 00:15:31.439 "abort": false, 00:15:31.439 "seek_hole": false, 00:15:31.439 "seek_data": false, 00:15:31.439 "copy": false, 00:15:31.439 "nvme_iov_md": false 00:15:31.439 }, 00:15:31.439 "driver_specific": { 00:15:31.439 "raid": { 00:15:31.439 "uuid": "f0e18ed8-3769-4427-88ad-ef2506b5b5dd", 00:15:31.439 "strip_size_kb": 64, 00:15:31.439 "state": "online", 00:15:31.439 "raid_level": "raid5f", 00:15:31.439 "superblock": false, 00:15:31.439 "num_base_bdevs": 4, 00:15:31.439 "num_base_bdevs_discovered": 4, 00:15:31.439 "num_base_bdevs_operational": 4, 00:15:31.439 "base_bdevs_list": [ 00:15:31.439 { 00:15:31.439 "name": "NewBaseBdev", 00:15:31.439 "uuid": "56a6d070-8b4d-490a-bb03-f33d45d632cd", 00:15:31.439 "is_configured": true, 00:15:31.439 "data_offset": 0, 00:15:31.439 "data_size": 65536 00:15:31.439 }, 00:15:31.439 { 00:15:31.439 "name": "BaseBdev2", 00:15:31.439 "uuid": "de3639a1-7973-46cf-b1ed-363dad1c3ad2", 00:15:31.439 "is_configured": true, 00:15:31.439 "data_offset": 0, 00:15:31.439 "data_size": 65536 00:15:31.439 }, 00:15:31.439 { 00:15:31.439 "name": "BaseBdev3", 00:15:31.439 "uuid": "e396e539-7a63-4115-9086-70efbd517747", 00:15:31.439 "is_configured": true, 00:15:31.439 "data_offset": 0, 00:15:31.439 "data_size": 65536 00:15:31.439 }, 00:15:31.439 { 00:15:31.439 "name": "BaseBdev4", 00:15:31.439 "uuid": "9911e1f5-943a-481b-99fc-cdb1f176259b", 00:15:31.439 "is_configured": true, 00:15:31.439 "data_offset": 0, 00:15:31.439 "data_size": 65536 00:15:31.439 } 00:15:31.439 ] 00:15:31.439 } 
00:15:31.439 } 00:15:31.439 }' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:31.439 BaseBdev2 00:15:31.439 BaseBdev3 00:15:31.439 BaseBdev4' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.439 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.700 [2024-11-10 15:24:37.989750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:31.700 [2024-11-10 15:24:37.989775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.700 [2024-11-10 15:24:37.989849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.700 [2024-11-10 15:24:37.990139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:31.700 [2024-11-10 15:24:37.990159] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 94642 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@952 -- # '[' -z 94642 ']' 00:15:31.700 15:24:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # kill -0 94642 00:15:31.700 15:24:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # uname 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 94642 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 94642' 00:15:31.700 killing process with pid 94642 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@971 -- # kill 94642 00:15:31.700 [2024-11-10 15:24:38.042348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:31.700 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@976 -- # wait 94642 00:15:31.961 [2024-11-10 15:24:38.118981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:32.221 00:15:32.221 real 0m10.063s 00:15:32.221 user 0m16.858s 00:15:32.221 sys 0m2.342s 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.221 ************************************ 00:15:32.221 END TEST raid5f_state_function_test 00:15:32.221 ************************************ 00:15:32.221 15:24:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:32.221 15:24:38 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:15:32.221 15:24:38 bdev_raid -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.221 15:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:32.221 ************************************ 00:15:32.221 START TEST raid5f_state_function_test_sb 00:15:32.221 ************************************ 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1127 -- # raid_state_function_test raid5f 4 true 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:32.221 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.221 15:24:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=95297 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L 
bdev_raid 00:15:32.222 Process raid pid: 95297 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95297' 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 95297 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@833 -- # '[' -z 95297 ']' 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.222 15:24:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.482 [2024-11-10 15:24:38.641259] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:15:32.482 [2024-11-10 15:24:38.641407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.482 [2024-11-10 15:24:38.782121] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:32.482 [2024-11-10 15:24:38.819377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.742 [2024-11-10 15:24:38.860025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.742 [2024-11-10 15:24:38.938574] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:32.742 [2024-11-10 15:24:38.938610] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@866 -- # return 0 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 [2024-11-10 15:24:39.448819] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.312 [2024-11-10 15:24:39.448937] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.312 [2024-11-10 15:24:39.448956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.312 [2024-11-10 15:24:39.448965] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.312 [2024-11-10 15:24:39.448976] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.312 [2024-11-10 15:24:39.448983] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.312 [2024-11-10 15:24:39.448991] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:15:33.312 [2024-11-10 15:24:39.448998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.312 15:24:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.312 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.312 "name": "Existed_Raid", 00:15:33.312 "uuid": "5de4ed7d-60d4-4840-abcc-6046c9c4d412", 00:15:33.312 "strip_size_kb": 64, 00:15:33.312 "state": "configuring", 00:15:33.312 "raid_level": "raid5f", 00:15:33.312 "superblock": true, 00:15:33.313 "num_base_bdevs": 4, 00:15:33.313 "num_base_bdevs_discovered": 0, 00:15:33.313 "num_base_bdevs_operational": 4, 00:15:33.313 "base_bdevs_list": [ 00:15:33.313 { 00:15:33.313 "name": "BaseBdev1", 00:15:33.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.313 "is_configured": false, 00:15:33.313 "data_offset": 0, 00:15:33.313 "data_size": 0 00:15:33.313 }, 00:15:33.313 { 00:15:33.313 "name": "BaseBdev2", 00:15:33.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.313 "is_configured": false, 00:15:33.313 "data_offset": 0, 00:15:33.313 "data_size": 0 00:15:33.313 }, 00:15:33.313 { 00:15:33.313 "name": "BaseBdev3", 00:15:33.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.313 "is_configured": false, 00:15:33.313 "data_offset": 0, 00:15:33.313 "data_size": 0 00:15:33.313 }, 00:15:33.313 { 00:15:33.313 "name": "BaseBdev4", 00:15:33.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.313 "is_configured": false, 00:15:33.313 "data_offset": 0, 00:15:33.313 "data_size": 0 00:15:33.313 } 00:15:33.313 ] 00:15:33.313 }' 00:15:33.313 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.313 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.572 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.572 15:24:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.572 [2024-11-10 15:24:39.932805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.572 [2024-11-10 15:24:39.932887] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 [2024-11-10 15:24:39.944844] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:33.832 [2024-11-10 15:24:39.944936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:33.832 [2024-11-10 15:24:39.944965] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:33.832 [2024-11-10 15:24:39.944985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:33.832 [2024-11-10 15:24:39.945005] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:33.832 [2024-11-10 15:24:39.945030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:33.832 [2024-11-10 15:24:39.945067] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:33.832 [2024-11-10 15:24:39.945094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:33.832 15:24:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 [2024-11-10 15:24:39.972183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.832 BaseBdev1 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.832 15:24:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.832 [ 00:15:33.832 { 00:15:33.832 "name": "BaseBdev1", 00:15:33.833 "aliases": [ 00:15:33.833 "38f70d49-ca94-4f29-a22f-8323dc08f359" 00:15:33.833 ], 00:15:33.833 "product_name": "Malloc disk", 00:15:33.833 "block_size": 512, 00:15:33.833 "num_blocks": 65536, 00:15:33.833 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:33.833 "assigned_rate_limits": { 00:15:33.833 "rw_ios_per_sec": 0, 00:15:33.833 "rw_mbytes_per_sec": 0, 00:15:33.833 "r_mbytes_per_sec": 0, 00:15:33.833 "w_mbytes_per_sec": 0 00:15:33.833 }, 00:15:33.833 "claimed": true, 00:15:33.833 "claim_type": "exclusive_write", 00:15:33.833 "zoned": false, 00:15:33.833 "supported_io_types": { 00:15:33.833 "read": true, 00:15:33.833 "write": true, 00:15:33.833 "unmap": true, 00:15:33.833 "flush": true, 00:15:33.833 "reset": true, 00:15:33.833 "nvme_admin": false, 00:15:33.833 "nvme_io": false, 00:15:33.833 "nvme_io_md": false, 00:15:33.833 "write_zeroes": true, 00:15:33.833 "zcopy": true, 00:15:33.833 "get_zone_info": false, 00:15:33.833 "zone_management": false, 00:15:33.833 "zone_append": false, 00:15:33.833 "compare": false, 00:15:33.833 "compare_and_write": false, 00:15:33.833 "abort": true, 00:15:33.833 "seek_hole": false, 00:15:33.833 "seek_data": false, 00:15:33.833 "copy": true, 00:15:33.833 "nvme_iov_md": false 00:15:33.833 }, 00:15:33.833 "memory_domains": [ 00:15:33.833 { 00:15:33.833 "dma_device_id": "system", 00:15:33.833 "dma_device_type": 1 00:15:33.833 }, 00:15:33.833 { 00:15:33.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.833 "dma_device_type": 2 00:15:33.833 } 00:15:33.833 ], 00:15:33.833 "driver_specific": {} 00:15:33.833 } 00:15:33.833 ] 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.833 "name": "Existed_Raid", 00:15:33.833 "uuid": "741c6df8-d078-47c5-a66b-8616048f2e3e", 00:15:33.833 "strip_size_kb": 64, 00:15:33.833 "state": "configuring", 00:15:33.833 "raid_level": "raid5f", 00:15:33.833 "superblock": true, 00:15:33.833 "num_base_bdevs": 4, 00:15:33.833 "num_base_bdevs_discovered": 1, 00:15:33.833 "num_base_bdevs_operational": 4, 00:15:33.833 "base_bdevs_list": [ 00:15:33.833 { 00:15:33.833 "name": "BaseBdev1", 00:15:33.833 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:33.833 "is_configured": true, 00:15:33.833 "data_offset": 2048, 00:15:33.833 "data_size": 63488 00:15:33.833 }, 00:15:33.833 { 00:15:33.833 "name": "BaseBdev2", 00:15:33.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.833 "is_configured": false, 00:15:33.833 "data_offset": 0, 00:15:33.833 "data_size": 0 00:15:33.833 }, 00:15:33.833 { 00:15:33.833 "name": "BaseBdev3", 00:15:33.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.833 "is_configured": false, 00:15:33.833 "data_offset": 0, 00:15:33.833 "data_size": 0 00:15:33.833 }, 00:15:33.833 { 00:15:33.833 "name": "BaseBdev4", 00:15:33.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.833 "is_configured": false, 00:15:33.833 "data_offset": 0, 00:15:33.833 "data_size": 0 00:15:33.833 } 00:15:33.833 ] 00:15:33.833 }' 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.833 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.093 [2024-11-10 15:24:40.420293] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.093 [2024-11-10 15:24:40.420419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.093 [2024-11-10 15:24:40.428354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.093 [2024-11-10 15:24:40.430431] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.093 [2024-11-10 15:24:40.430469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.093 [2024-11-10 15:24:40.430480] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.093 [2024-11-10 15:24:40.430503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.093 [2024-11-10 15:24:40.430510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.093 [2024-11-10 15:24:40.430517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.093 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.353 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.353 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.353 "name": "Existed_Raid", 00:15:34.353 "uuid": 
"55513cbb-32b6-4468-951b-3833630525f4", 00:15:34.353 "strip_size_kb": 64, 00:15:34.353 "state": "configuring", 00:15:34.353 "raid_level": "raid5f", 00:15:34.353 "superblock": true, 00:15:34.353 "num_base_bdevs": 4, 00:15:34.353 "num_base_bdevs_discovered": 1, 00:15:34.353 "num_base_bdevs_operational": 4, 00:15:34.353 "base_bdevs_list": [ 00:15:34.353 { 00:15:34.353 "name": "BaseBdev1", 00:15:34.353 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:34.353 "is_configured": true, 00:15:34.353 "data_offset": 2048, 00:15:34.353 "data_size": 63488 00:15:34.353 }, 00:15:34.353 { 00:15:34.353 "name": "BaseBdev2", 00:15:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.353 "is_configured": false, 00:15:34.353 "data_offset": 0, 00:15:34.353 "data_size": 0 00:15:34.353 }, 00:15:34.353 { 00:15:34.353 "name": "BaseBdev3", 00:15:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.353 "is_configured": false, 00:15:34.353 "data_offset": 0, 00:15:34.353 "data_size": 0 00:15:34.353 }, 00:15:34.353 { 00:15:34.353 "name": "BaseBdev4", 00:15:34.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.353 "is_configured": false, 00:15:34.353 "data_offset": 0, 00:15:34.353 "data_size": 0 00:15:34.353 } 00:15:34.353 ] 00:15:34.353 }' 00:15:34.353 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.353 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 [2024-11-10 15:24:40.897347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:34.613 BaseBdev2 00:15:34.613 
15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 [ 00:15:34.613 { 00:15:34.613 "name": "BaseBdev2", 00:15:34.613 "aliases": [ 00:15:34.613 "536b35f8-f4e9-4697-a981-b50e0f7e2180" 00:15:34.613 ], 00:15:34.613 "product_name": "Malloc disk", 00:15:34.613 "block_size": 512, 00:15:34.613 "num_blocks": 65536, 00:15:34.613 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 00:15:34.613 "assigned_rate_limits": { 
00:15:34.613 "rw_ios_per_sec": 0, 00:15:34.613 "rw_mbytes_per_sec": 0, 00:15:34.613 "r_mbytes_per_sec": 0, 00:15:34.613 "w_mbytes_per_sec": 0 00:15:34.613 }, 00:15:34.613 "claimed": true, 00:15:34.613 "claim_type": "exclusive_write", 00:15:34.613 "zoned": false, 00:15:34.613 "supported_io_types": { 00:15:34.613 "read": true, 00:15:34.613 "write": true, 00:15:34.613 "unmap": true, 00:15:34.613 "flush": true, 00:15:34.613 "reset": true, 00:15:34.613 "nvme_admin": false, 00:15:34.613 "nvme_io": false, 00:15:34.613 "nvme_io_md": false, 00:15:34.613 "write_zeroes": true, 00:15:34.613 "zcopy": true, 00:15:34.613 "get_zone_info": false, 00:15:34.613 "zone_management": false, 00:15:34.613 "zone_append": false, 00:15:34.613 "compare": false, 00:15:34.613 "compare_and_write": false, 00:15:34.613 "abort": true, 00:15:34.613 "seek_hole": false, 00:15:34.613 "seek_data": false, 00:15:34.613 "copy": true, 00:15:34.613 "nvme_iov_md": false 00:15:34.613 }, 00:15:34.613 "memory_domains": [ 00:15:34.613 { 00:15:34.613 "dma_device_id": "system", 00:15:34.613 "dma_device_type": 1 00:15:34.613 }, 00:15:34.613 { 00:15:34.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:34.613 "dma_device_type": 2 00:15:34.613 } 00:15:34.613 ], 00:15:34.613 "driver_specific": {} 00:15:34.613 } 00:15:34.613 ] 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.613 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.872 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.872 "name": "Existed_Raid", 00:15:34.872 "uuid": "55513cbb-32b6-4468-951b-3833630525f4", 00:15:34.872 "strip_size_kb": 64, 00:15:34.872 "state": "configuring", 00:15:34.872 "raid_level": "raid5f", 00:15:34.872 "superblock": true, 00:15:34.872 "num_base_bdevs": 4, 00:15:34.872 "num_base_bdevs_discovered": 2, 00:15:34.872 
"num_base_bdevs_operational": 4, 00:15:34.872 "base_bdevs_list": [ 00:15:34.872 { 00:15:34.872 "name": "BaseBdev1", 00:15:34.872 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:34.872 "is_configured": true, 00:15:34.872 "data_offset": 2048, 00:15:34.872 "data_size": 63488 00:15:34.872 }, 00:15:34.872 { 00:15:34.872 "name": "BaseBdev2", 00:15:34.872 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 00:15:34.872 "is_configured": true, 00:15:34.872 "data_offset": 2048, 00:15:34.872 "data_size": 63488 00:15:34.872 }, 00:15:34.872 { 00:15:34.872 "name": "BaseBdev3", 00:15:34.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.872 "is_configured": false, 00:15:34.872 "data_offset": 0, 00:15:34.872 "data_size": 0 00:15:34.872 }, 00:15:34.872 { 00:15:34.872 "name": "BaseBdev4", 00:15:34.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.872 "is_configured": false, 00:15:34.872 "data_offset": 0, 00:15:34.872 "data_size": 0 00:15:34.872 } 00:15:34.872 ] 00:15:34.872 }' 00:15:34.872 15:24:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.872 15:24:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 [2024-11-10 15:24:41.445989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:35.133 BaseBdev3 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:35.133 15:24:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.133 [ 00:15:35.133 { 00:15:35.133 "name": "BaseBdev3", 00:15:35.133 "aliases": [ 00:15:35.133 "c9437eb8-c053-47a1-b50d-69a6f2186ec7" 00:15:35.133 ], 00:15:35.133 "product_name": "Malloc disk", 00:15:35.133 "block_size": 512, 00:15:35.133 "num_blocks": 65536, 00:15:35.133 "uuid": "c9437eb8-c053-47a1-b50d-69a6f2186ec7", 00:15:35.133 "assigned_rate_limits": { 00:15:35.133 "rw_ios_per_sec": 0, 00:15:35.133 "rw_mbytes_per_sec": 0, 00:15:35.133 "r_mbytes_per_sec": 0, 00:15:35.133 "w_mbytes_per_sec": 0 00:15:35.133 }, 00:15:35.133 "claimed": true, 00:15:35.133 "claim_type": "exclusive_write", 
00:15:35.133 "zoned": false, 00:15:35.133 "supported_io_types": { 00:15:35.133 "read": true, 00:15:35.133 "write": true, 00:15:35.133 "unmap": true, 00:15:35.133 "flush": true, 00:15:35.133 "reset": true, 00:15:35.133 "nvme_admin": false, 00:15:35.133 "nvme_io": false, 00:15:35.133 "nvme_io_md": false, 00:15:35.133 "write_zeroes": true, 00:15:35.133 "zcopy": true, 00:15:35.133 "get_zone_info": false, 00:15:35.133 "zone_management": false, 00:15:35.133 "zone_append": false, 00:15:35.133 "compare": false, 00:15:35.133 "compare_and_write": false, 00:15:35.133 "abort": true, 00:15:35.133 "seek_hole": false, 00:15:35.133 "seek_data": false, 00:15:35.133 "copy": true, 00:15:35.133 "nvme_iov_md": false 00:15:35.133 }, 00:15:35.133 "memory_domains": [ 00:15:35.133 { 00:15:35.133 "dma_device_id": "system", 00:15:35.133 "dma_device_type": 1 00:15:35.133 }, 00:15:35.133 { 00:15:35.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.133 "dma_device_type": 2 00:15:35.133 } 00:15:35.133 ], 00:15:35.133 "driver_specific": {} 00:15:35.133 } 00:15:35.133 ] 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.133 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.393 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.393 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.393 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.394 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.394 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.394 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.394 "name": "Existed_Raid", 00:15:35.394 "uuid": "55513cbb-32b6-4468-951b-3833630525f4", 00:15:35.394 "strip_size_kb": 64, 00:15:35.394 "state": "configuring", 00:15:35.394 "raid_level": "raid5f", 00:15:35.394 "superblock": true, 00:15:35.394 "num_base_bdevs": 4, 00:15:35.394 "num_base_bdevs_discovered": 3, 00:15:35.394 "num_base_bdevs_operational": 4, 00:15:35.394 "base_bdevs_list": [ 00:15:35.394 { 00:15:35.394 "name": "BaseBdev1", 00:15:35.394 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:35.394 "is_configured": true, 00:15:35.394 "data_offset": 2048, 
00:15:35.394 "data_size": 63488 00:15:35.394 }, 00:15:35.394 { 00:15:35.394 "name": "BaseBdev2", 00:15:35.394 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 00:15:35.394 "is_configured": true, 00:15:35.394 "data_offset": 2048, 00:15:35.394 "data_size": 63488 00:15:35.394 }, 00:15:35.394 { 00:15:35.394 "name": "BaseBdev3", 00:15:35.394 "uuid": "c9437eb8-c053-47a1-b50d-69a6f2186ec7", 00:15:35.394 "is_configured": true, 00:15:35.394 "data_offset": 2048, 00:15:35.394 "data_size": 63488 00:15:35.394 }, 00:15:35.394 { 00:15:35.394 "name": "BaseBdev4", 00:15:35.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.394 "is_configured": false, 00:15:35.394 "data_offset": 0, 00:15:35.394 "data_size": 0 00:15:35.394 } 00:15:35.394 ] 00:15:35.394 }' 00:15:35.394 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.394 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 [2024-11-10 15:24:41.950811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:35.654 [2024-11-10 15:24:41.951154] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:35.654 BaseBdev4 00:15:35.654 [2024-11-10 15:24:41.951229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:35.654 [2024-11-10 15:24:41.951567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:35.654 [2024-11-10 15:24:41.952095] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:35.654 
[2024-11-10 15:24:41.952113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:15:35.654 [2024-11-10 15:24:41.952260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.654 [ 00:15:35.654 { 00:15:35.654 "name": "BaseBdev4", 00:15:35.654 "aliases": [ 
00:15:35.654 "582f6d37-f45e-4904-8a12-9429ad49e8c6" 00:15:35.654 ], 00:15:35.654 "product_name": "Malloc disk", 00:15:35.654 "block_size": 512, 00:15:35.654 "num_blocks": 65536, 00:15:35.654 "uuid": "582f6d37-f45e-4904-8a12-9429ad49e8c6", 00:15:35.654 "assigned_rate_limits": { 00:15:35.654 "rw_ios_per_sec": 0, 00:15:35.654 "rw_mbytes_per_sec": 0, 00:15:35.654 "r_mbytes_per_sec": 0, 00:15:35.654 "w_mbytes_per_sec": 0 00:15:35.654 }, 00:15:35.654 "claimed": true, 00:15:35.654 "claim_type": "exclusive_write", 00:15:35.654 "zoned": false, 00:15:35.654 "supported_io_types": { 00:15:35.654 "read": true, 00:15:35.654 "write": true, 00:15:35.654 "unmap": true, 00:15:35.654 "flush": true, 00:15:35.654 "reset": true, 00:15:35.654 "nvme_admin": false, 00:15:35.654 "nvme_io": false, 00:15:35.654 "nvme_io_md": false, 00:15:35.654 "write_zeroes": true, 00:15:35.654 "zcopy": true, 00:15:35.654 "get_zone_info": false, 00:15:35.654 "zone_management": false, 00:15:35.654 "zone_append": false, 00:15:35.654 "compare": false, 00:15:35.654 "compare_and_write": false, 00:15:35.654 "abort": true, 00:15:35.654 "seek_hole": false, 00:15:35.654 "seek_data": false, 00:15:35.654 "copy": true, 00:15:35.654 "nvme_iov_md": false 00:15:35.654 }, 00:15:35.654 "memory_domains": [ 00:15:35.654 { 00:15:35.654 "dma_device_id": "system", 00:15:35.654 "dma_device_type": 1 00:15:35.654 }, 00:15:35.654 { 00:15:35.654 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:35.654 "dma_device_type": 2 00:15:35.654 } 00:15:35.654 ], 00:15:35.654 "driver_specific": {} 00:15:35.654 } 00:15:35.654 ] 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < 
num_base_bdevs )) 00:15:35.654 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.655 15:24:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:35.915 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.915 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.915 "name": "Existed_Raid", 00:15:35.915 "uuid": 
"55513cbb-32b6-4468-951b-3833630525f4", 00:15:35.915 "strip_size_kb": 64, 00:15:35.915 "state": "online", 00:15:35.915 "raid_level": "raid5f", 00:15:35.915 "superblock": true, 00:15:35.915 "num_base_bdevs": 4, 00:15:35.915 "num_base_bdevs_discovered": 4, 00:15:35.915 "num_base_bdevs_operational": 4, 00:15:35.915 "base_bdevs_list": [ 00:15:35.915 { 00:15:35.915 "name": "BaseBdev1", 00:15:35.915 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev2", 00:15:35.915 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev3", 00:15:35.915 "uuid": "c9437eb8-c053-47a1-b50d-69a6f2186ec7", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 }, 00:15:35.915 { 00:15:35.915 "name": "BaseBdev4", 00:15:35.915 "uuid": "582f6d37-f45e-4904-8a12-9429ad49e8c6", 00:15:35.915 "is_configured": true, 00:15:35.915 "data_offset": 2048, 00:15:35.915 "data_size": 63488 00:15:35.915 } 00:15:35.915 ] 00:15:35.915 }' 00:15:35.915 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.915 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.175 15:24:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.175 [2024-11-10 15:24:42.447395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.175 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.175 "name": "Existed_Raid", 00:15:36.175 "aliases": [ 00:15:36.175 "55513cbb-32b6-4468-951b-3833630525f4" 00:15:36.175 ], 00:15:36.175 "product_name": "Raid Volume", 00:15:36.175 "block_size": 512, 00:15:36.175 "num_blocks": 190464, 00:15:36.175 "uuid": "55513cbb-32b6-4468-951b-3833630525f4", 00:15:36.175 "assigned_rate_limits": { 00:15:36.175 "rw_ios_per_sec": 0, 00:15:36.175 "rw_mbytes_per_sec": 0, 00:15:36.175 "r_mbytes_per_sec": 0, 00:15:36.175 "w_mbytes_per_sec": 0 00:15:36.175 }, 00:15:36.175 "claimed": false, 00:15:36.175 "zoned": false, 00:15:36.175 "supported_io_types": { 00:15:36.175 "read": true, 00:15:36.175 "write": true, 00:15:36.175 "unmap": false, 00:15:36.175 "flush": false, 00:15:36.175 "reset": true, 00:15:36.175 "nvme_admin": false, 00:15:36.175 "nvme_io": false, 00:15:36.175 "nvme_io_md": false, 00:15:36.175 "write_zeroes": true, 00:15:36.175 "zcopy": false, 00:15:36.175 "get_zone_info": false, 00:15:36.175 "zone_management": false, 00:15:36.175 
"zone_append": false, 00:15:36.175 "compare": false, 00:15:36.175 "compare_and_write": false, 00:15:36.175 "abort": false, 00:15:36.175 "seek_hole": false, 00:15:36.175 "seek_data": false, 00:15:36.175 "copy": false, 00:15:36.175 "nvme_iov_md": false 00:15:36.175 }, 00:15:36.175 "driver_specific": { 00:15:36.175 "raid": { 00:15:36.175 "uuid": "55513cbb-32b6-4468-951b-3833630525f4", 00:15:36.175 "strip_size_kb": 64, 00:15:36.175 "state": "online", 00:15:36.175 "raid_level": "raid5f", 00:15:36.176 "superblock": true, 00:15:36.176 "num_base_bdevs": 4, 00:15:36.176 "num_base_bdevs_discovered": 4, 00:15:36.176 "num_base_bdevs_operational": 4, 00:15:36.176 "base_bdevs_list": [ 00:15:36.176 { 00:15:36.176 "name": "BaseBdev1", 00:15:36.176 "uuid": "38f70d49-ca94-4f29-a22f-8323dc08f359", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 }, 00:15:36.176 { 00:15:36.176 "name": "BaseBdev2", 00:15:36.176 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 }, 00:15:36.176 { 00:15:36.176 "name": "BaseBdev3", 00:15:36.176 "uuid": "c9437eb8-c053-47a1-b50d-69a6f2186ec7", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 }, 00:15:36.176 { 00:15:36.176 "name": "BaseBdev4", 00:15:36.176 "uuid": "582f6d37-f45e-4904-8a12-9429ad49e8c6", 00:15:36.176 "is_configured": true, 00:15:36.176 "data_offset": 2048, 00:15:36.176 "data_size": 63488 00:15:36.176 } 00:15:36.176 ] 00:15:36.176 } 00:15:36.176 } 00:15:36.176 }' 00:15:36.176 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.176 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:36.176 BaseBdev2 00:15:36.176 BaseBdev3 
00:15:36.176 BaseBdev4' 00:15:36.176 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.436 [2024-11-10 15:24:42.735335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:36.436 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.437 
15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.437 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.697 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.697 "name": "Existed_Raid", 00:15:36.697 "uuid": "55513cbb-32b6-4468-951b-3833630525f4", 00:15:36.697 "strip_size_kb": 64, 00:15:36.697 "state": "online", 00:15:36.697 "raid_level": "raid5f", 00:15:36.697 "superblock": true, 00:15:36.697 "num_base_bdevs": 4, 00:15:36.697 "num_base_bdevs_discovered": 3, 00:15:36.697 "num_base_bdevs_operational": 3, 00:15:36.697 "base_bdevs_list": [ 00:15:36.697 { 00:15:36.697 "name": null, 00:15:36.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.697 "is_configured": false, 00:15:36.697 "data_offset": 0, 00:15:36.697 "data_size": 63488 00:15:36.697 }, 00:15:36.697 { 00:15:36.697 "name": "BaseBdev2", 00:15:36.697 "uuid": "536b35f8-f4e9-4697-a981-b50e0f7e2180", 
00:15:36.697 "is_configured": true, 00:15:36.697 "data_offset": 2048, 00:15:36.697 "data_size": 63488 00:15:36.697 }, 00:15:36.697 { 00:15:36.697 "name": "BaseBdev3", 00:15:36.697 "uuid": "c9437eb8-c053-47a1-b50d-69a6f2186ec7", 00:15:36.697 "is_configured": true, 00:15:36.697 "data_offset": 2048, 00:15:36.697 "data_size": 63488 00:15:36.697 }, 00:15:36.697 { 00:15:36.697 "name": "BaseBdev4", 00:15:36.697 "uuid": "582f6d37-f45e-4904-8a12-9429ad49e8c6", 00:15:36.697 "is_configured": true, 00:15:36.697 "data_offset": 2048, 00:15:36.697 "data_size": 63488 00:15:36.697 } 00:15:36.697 ] 00:15:36.697 }' 00:15:36.697 15:24:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.697 15:24:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:36.957 
15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.957 [2024-11-10 15:24:43.252174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:36.957 [2024-11-10 15:24:43.252400] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:36.957 [2024-11-10 15:24:43.272932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:36.957 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.958 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.218 15:24:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 [2024-11-10 15:24:43.332973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.218 [2024-11-10 15:24:43.413531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:37.218 [2024-11-10 15:24:43.413667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:15:37.218 15:24:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.218 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 BaseBdev2 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 [ 00:15:37.219 { 00:15:37.219 "name": "BaseBdev2", 00:15:37.219 "aliases": [ 00:15:37.219 "fa055fb5-b6e2-4848-8a12-e4c462ed7545" 00:15:37.219 ], 00:15:37.219 "product_name": "Malloc disk", 00:15:37.219 "block_size": 512, 00:15:37.219 "num_blocks": 65536, 00:15:37.219 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:37.219 "assigned_rate_limits": { 00:15:37.219 "rw_ios_per_sec": 0, 00:15:37.219 "rw_mbytes_per_sec": 0, 00:15:37.219 "r_mbytes_per_sec": 0, 00:15:37.219 "w_mbytes_per_sec": 0 00:15:37.219 }, 
00:15:37.219 "claimed": false, 00:15:37.219 "zoned": false, 00:15:37.219 "supported_io_types": { 00:15:37.219 "read": true, 00:15:37.219 "write": true, 00:15:37.219 "unmap": true, 00:15:37.219 "flush": true, 00:15:37.219 "reset": true, 00:15:37.219 "nvme_admin": false, 00:15:37.219 "nvme_io": false, 00:15:37.219 "nvme_io_md": false, 00:15:37.219 "write_zeroes": true, 00:15:37.219 "zcopy": true, 00:15:37.219 "get_zone_info": false, 00:15:37.219 "zone_management": false, 00:15:37.219 "zone_append": false, 00:15:37.219 "compare": false, 00:15:37.219 "compare_and_write": false, 00:15:37.219 "abort": true, 00:15:37.219 "seek_hole": false, 00:15:37.219 "seek_data": false, 00:15:37.219 "copy": true, 00:15:37.219 "nvme_iov_md": false 00:15:37.219 }, 00:15:37.219 "memory_domains": [ 00:15:37.219 { 00:15:37.219 "dma_device_id": "system", 00:15:37.219 "dma_device_type": 1 00:15:37.219 }, 00:15:37.219 { 00:15:37.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.219 "dma_device_type": 2 00:15:37.219 } 00:15:37.219 ], 00:15:37.219 "driver_specific": {} 00:15:37.219 } 00:15:37.219 ] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 BaseBdev3 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev3 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.219 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.480 [ 00:15:37.480 { 00:15:37.480 "name": "BaseBdev3", 00:15:37.480 "aliases": [ 00:15:37.480 "597d0afd-bd19-4e97-95e1-819c6e49e96e" 00:15:37.480 ], 00:15:37.480 "product_name": "Malloc disk", 00:15:37.480 "block_size": 512, 00:15:37.480 "num_blocks": 65536, 00:15:37.480 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:37.480 "assigned_rate_limits": { 00:15:37.480 "rw_ios_per_sec": 0, 00:15:37.480 
"rw_mbytes_per_sec": 0, 00:15:37.480 "r_mbytes_per_sec": 0, 00:15:37.480 "w_mbytes_per_sec": 0 00:15:37.480 }, 00:15:37.480 "claimed": false, 00:15:37.480 "zoned": false, 00:15:37.480 "supported_io_types": { 00:15:37.480 "read": true, 00:15:37.480 "write": true, 00:15:37.480 "unmap": true, 00:15:37.480 "flush": true, 00:15:37.480 "reset": true, 00:15:37.480 "nvme_admin": false, 00:15:37.480 "nvme_io": false, 00:15:37.480 "nvme_io_md": false, 00:15:37.480 "write_zeroes": true, 00:15:37.480 "zcopy": true, 00:15:37.480 "get_zone_info": false, 00:15:37.480 "zone_management": false, 00:15:37.480 "zone_append": false, 00:15:37.480 "compare": false, 00:15:37.480 "compare_and_write": false, 00:15:37.480 "abort": true, 00:15:37.480 "seek_hole": false, 00:15:37.480 "seek_data": false, 00:15:37.480 "copy": true, 00:15:37.480 "nvme_iov_md": false 00:15:37.480 }, 00:15:37.480 "memory_domains": [ 00:15:37.480 { 00:15:37.480 "dma_device_id": "system", 00:15:37.480 "dma_device_type": 1 00:15:37.480 }, 00:15:37.480 { 00:15:37.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.480 "dma_device_type": 2 00:15:37.480 } 00:15:37.480 ], 00:15:37.480 "driver_specific": {} 00:15:37.480 } 00:15:37.480 ] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.480 BaseBdev4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.480 [ 00:15:37.480 { 00:15:37.480 "name": "BaseBdev4", 00:15:37.480 "aliases": [ 00:15:37.480 "53335d0f-9df5-4978-b894-078300fafe25" 00:15:37.480 ], 00:15:37.480 "product_name": "Malloc disk", 00:15:37.480 "block_size": 512, 00:15:37.480 "num_blocks": 65536, 00:15:37.480 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 
00:15:37.480 "assigned_rate_limits": { 00:15:37.480 "rw_ios_per_sec": 0, 00:15:37.480 "rw_mbytes_per_sec": 0, 00:15:37.480 "r_mbytes_per_sec": 0, 00:15:37.480 "w_mbytes_per_sec": 0 00:15:37.480 }, 00:15:37.480 "claimed": false, 00:15:37.480 "zoned": false, 00:15:37.480 "supported_io_types": { 00:15:37.480 "read": true, 00:15:37.480 "write": true, 00:15:37.480 "unmap": true, 00:15:37.480 "flush": true, 00:15:37.480 "reset": true, 00:15:37.480 "nvme_admin": false, 00:15:37.480 "nvme_io": false, 00:15:37.480 "nvme_io_md": false, 00:15:37.480 "write_zeroes": true, 00:15:37.480 "zcopy": true, 00:15:37.480 "get_zone_info": false, 00:15:37.480 "zone_management": false, 00:15:37.480 "zone_append": false, 00:15:37.480 "compare": false, 00:15:37.480 "compare_and_write": false, 00:15:37.480 "abort": true, 00:15:37.480 "seek_hole": false, 00:15:37.480 "seek_data": false, 00:15:37.480 "copy": true, 00:15:37.480 "nvme_iov_md": false 00:15:37.480 }, 00:15:37.480 "memory_domains": [ 00:15:37.480 { 00:15:37.480 "dma_device_id": "system", 00:15:37.480 "dma_device_type": 1 00:15:37.480 }, 00:15:37.480 { 00:15:37.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.480 "dma_device_type": 2 00:15:37.480 } 00:15:37.480 ], 00:15:37.480 "driver_specific": {} 00:15:37.480 } 00:15:37.480 ] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.480 [2024-11-10 15:24:43.666537] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.480 [2024-11-10 15:24:43.666656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.480 [2024-11-10 15:24:43.666713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.480 [2024-11-10 15:24:43.668891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.480 [2024-11-10 15:24:43.668983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.480 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.481 "name": "Existed_Raid", 00:15:37.481 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:37.481 "strip_size_kb": 64, 00:15:37.481 "state": "configuring", 00:15:37.481 "raid_level": "raid5f", 00:15:37.481 "superblock": true, 00:15:37.481 "num_base_bdevs": 4, 00:15:37.481 "num_base_bdevs_discovered": 3, 00:15:37.481 "num_base_bdevs_operational": 4, 00:15:37.481 "base_bdevs_list": [ 00:15:37.481 { 00:15:37.481 "name": "BaseBdev1", 00:15:37.481 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.481 "is_configured": false, 00:15:37.481 "data_offset": 0, 00:15:37.481 "data_size": 0 00:15:37.481 }, 00:15:37.481 { 00:15:37.481 "name": "BaseBdev2", 00:15:37.481 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:37.481 "is_configured": true, 00:15:37.481 "data_offset": 2048, 00:15:37.481 "data_size": 63488 00:15:37.481 }, 00:15:37.481 { 00:15:37.481 "name": "BaseBdev3", 00:15:37.481 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:37.481 "is_configured": true, 00:15:37.481 "data_offset": 2048, 00:15:37.481 "data_size": 63488 00:15:37.481 }, 00:15:37.481 { 00:15:37.481 "name": "BaseBdev4", 00:15:37.481 "uuid": 
"53335d0f-9df5-4978-b894-078300fafe25", 00:15:37.481 "is_configured": true, 00:15:37.481 "data_offset": 2048, 00:15:37.481 "data_size": 63488 00:15:37.481 } 00:15:37.481 ] 00:15:37.481 }' 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.481 15:24:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.051 [2024-11-10 15:24:44.122620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.051 15:24:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.051 "name": "Existed_Raid", 00:15:38.051 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:38.051 "strip_size_kb": 64, 00:15:38.051 "state": "configuring", 00:15:38.051 "raid_level": "raid5f", 00:15:38.051 "superblock": true, 00:15:38.051 "num_base_bdevs": 4, 00:15:38.051 "num_base_bdevs_discovered": 2, 00:15:38.051 "num_base_bdevs_operational": 4, 00:15:38.051 "base_bdevs_list": [ 00:15:38.051 { 00:15:38.051 "name": "BaseBdev1", 00:15:38.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.051 "is_configured": false, 00:15:38.051 "data_offset": 0, 00:15:38.051 "data_size": 0 00:15:38.051 }, 00:15:38.051 { 00:15:38.051 "name": null, 00:15:38.051 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:38.051 "is_configured": false, 00:15:38.051 "data_offset": 0, 00:15:38.051 "data_size": 63488 00:15:38.051 }, 00:15:38.051 { 00:15:38.051 "name": "BaseBdev3", 00:15:38.051 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:38.051 "is_configured": true, 00:15:38.051 "data_offset": 2048, 00:15:38.051 "data_size": 63488 00:15:38.051 }, 00:15:38.051 { 
00:15:38.051 "name": "BaseBdev4", 00:15:38.051 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:38.051 "is_configured": true, 00:15:38.051 "data_offset": 2048, 00:15:38.051 "data_size": 63488 00:15:38.051 } 00:15:38.051 ] 00:15:38.051 }' 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.051 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.311 [2024-11-10 15:24:44.660073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:38.311 BaseBdev1 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- 
# local bdev_name=BaseBdev1 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.311 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.572 [ 00:15:38.572 { 00:15:38.572 "name": "BaseBdev1", 00:15:38.572 "aliases": [ 00:15:38.572 "ce8cef61-8e93-4333-8b43-6f6737ef5b41" 00:15:38.572 ], 00:15:38.572 "product_name": "Malloc disk", 00:15:38.572 "block_size": 512, 00:15:38.572 "num_blocks": 65536, 00:15:38.572 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:38.572 "assigned_rate_limits": { 00:15:38.572 "rw_ios_per_sec": 0, 00:15:38.572 "rw_mbytes_per_sec": 0, 00:15:38.572 "r_mbytes_per_sec": 0, 00:15:38.572 "w_mbytes_per_sec": 0 00:15:38.572 }, 00:15:38.572 "claimed": true, 00:15:38.572 "claim_type": "exclusive_write", 00:15:38.572 "zoned": false, 00:15:38.572 "supported_io_types": { 00:15:38.572 
"read": true, 00:15:38.572 "write": true, 00:15:38.572 "unmap": true, 00:15:38.572 "flush": true, 00:15:38.572 "reset": true, 00:15:38.572 "nvme_admin": false, 00:15:38.572 "nvme_io": false, 00:15:38.572 "nvme_io_md": false, 00:15:38.572 "write_zeroes": true, 00:15:38.572 "zcopy": true, 00:15:38.572 "get_zone_info": false, 00:15:38.572 "zone_management": false, 00:15:38.572 "zone_append": false, 00:15:38.572 "compare": false, 00:15:38.572 "compare_and_write": false, 00:15:38.572 "abort": true, 00:15:38.572 "seek_hole": false, 00:15:38.572 "seek_data": false, 00:15:38.572 "copy": true, 00:15:38.572 "nvme_iov_md": false 00:15:38.572 }, 00:15:38.572 "memory_domains": [ 00:15:38.572 { 00:15:38.572 "dma_device_id": "system", 00:15:38.572 "dma_device_type": 1 00:15:38.572 }, 00:15:38.572 { 00:15:38.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.572 "dma_device_type": 2 00:15:38.572 } 00:15:38.572 ], 00:15:38.572 "driver_specific": {} 00:15:38.572 } 00:15:38.572 ] 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.572 15:24:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.572 "name": "Existed_Raid", 00:15:38.572 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:38.572 "strip_size_kb": 64, 00:15:38.572 "state": "configuring", 00:15:38.572 "raid_level": "raid5f", 00:15:38.572 "superblock": true, 00:15:38.572 "num_base_bdevs": 4, 00:15:38.572 "num_base_bdevs_discovered": 3, 00:15:38.572 "num_base_bdevs_operational": 4, 00:15:38.572 "base_bdevs_list": [ 00:15:38.572 { 00:15:38.572 "name": "BaseBdev1", 00:15:38.572 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:38.572 "is_configured": true, 00:15:38.572 "data_offset": 2048, 00:15:38.572 "data_size": 63488 00:15:38.572 }, 00:15:38.572 { 00:15:38.572 "name": null, 00:15:38.572 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:38.572 "is_configured": false, 00:15:38.572 "data_offset": 0, 00:15:38.572 "data_size": 63488 00:15:38.572 }, 00:15:38.572 { 
00:15:38.572 "name": "BaseBdev3", 00:15:38.572 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:38.572 "is_configured": true, 00:15:38.572 "data_offset": 2048, 00:15:38.572 "data_size": 63488 00:15:38.572 }, 00:15:38.572 { 00:15:38.572 "name": "BaseBdev4", 00:15:38.572 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:38.572 "is_configured": true, 00:15:38.572 "data_offset": 2048, 00:15:38.572 "data_size": 63488 00:15:38.572 } 00:15:38.572 ] 00:15:38.572 }' 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.572 15:24:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.833 [2024-11-10 15:24:45.172234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.833 15:24:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.833 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.093 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.093 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.093 "name": "Existed_Raid", 00:15:39.093 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 
00:15:39.093 "strip_size_kb": 64, 00:15:39.093 "state": "configuring", 00:15:39.093 "raid_level": "raid5f", 00:15:39.093 "superblock": true, 00:15:39.093 "num_base_bdevs": 4, 00:15:39.093 "num_base_bdevs_discovered": 2, 00:15:39.093 "num_base_bdevs_operational": 4, 00:15:39.093 "base_bdevs_list": [ 00:15:39.093 { 00:15:39.093 "name": "BaseBdev1", 00:15:39.093 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:39.093 "is_configured": true, 00:15:39.093 "data_offset": 2048, 00:15:39.093 "data_size": 63488 00:15:39.093 }, 00:15:39.093 { 00:15:39.093 "name": null, 00:15:39.093 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:39.093 "is_configured": false, 00:15:39.093 "data_offset": 0, 00:15:39.093 "data_size": 63488 00:15:39.093 }, 00:15:39.093 { 00:15:39.093 "name": null, 00:15:39.093 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:39.093 "is_configured": false, 00:15:39.093 "data_offset": 0, 00:15:39.093 "data_size": 63488 00:15:39.093 }, 00:15:39.093 { 00:15:39.093 "name": "BaseBdev4", 00:15:39.093 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:39.093 "is_configured": true, 00:15:39.093 "data_offset": 2048, 00:15:39.093 "data_size": 63488 00:15:39.093 } 00:15:39.093 ] 00:15:39.093 }' 00:15:39.093 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.093 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 [2024-11-10 15:24:45.624405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.353 
15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.353 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.353 "name": "Existed_Raid", 00:15:39.353 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:39.353 "strip_size_kb": 64, 00:15:39.353 "state": "configuring", 00:15:39.353 "raid_level": "raid5f", 00:15:39.353 "superblock": true, 00:15:39.353 "num_base_bdevs": 4, 00:15:39.353 "num_base_bdevs_discovered": 3, 00:15:39.353 "num_base_bdevs_operational": 4, 00:15:39.353 "base_bdevs_list": [ 00:15:39.353 { 00:15:39.353 "name": "BaseBdev1", 00:15:39.353 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:39.353 "is_configured": true, 00:15:39.353 "data_offset": 2048, 00:15:39.353 "data_size": 63488 00:15:39.353 }, 00:15:39.353 { 00:15:39.353 "name": null, 00:15:39.353 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:39.353 "is_configured": false, 00:15:39.353 "data_offset": 0, 00:15:39.353 "data_size": 63488 00:15:39.353 }, 00:15:39.353 { 00:15:39.353 "name": "BaseBdev3", 00:15:39.353 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:39.353 "is_configured": true, 00:15:39.353 "data_offset": 2048, 00:15:39.353 "data_size": 63488 00:15:39.353 }, 00:15:39.353 { 00:15:39.353 "name": "BaseBdev4", 00:15:39.353 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:39.353 "is_configured": true, 00:15:39.353 "data_offset": 2048, 00:15:39.353 "data_size": 63488 00:15:39.354 } 
00:15:39.354 ] 00:15:39.354 }' 00:15:39.354 15:24:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.354 15:24:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.923 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.923 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.923 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 [2024-11-10 15:24:46.160562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.924 "name": "Existed_Raid", 00:15:39.924 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:39.924 "strip_size_kb": 64, 00:15:39.924 "state": "configuring", 00:15:39.924 "raid_level": "raid5f", 00:15:39.924 "superblock": true, 00:15:39.924 "num_base_bdevs": 4, 00:15:39.924 "num_base_bdevs_discovered": 2, 00:15:39.924 "num_base_bdevs_operational": 4, 00:15:39.924 "base_bdevs_list": [ 00:15:39.924 { 00:15:39.924 "name": null, 00:15:39.924 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:39.924 "is_configured": false, 00:15:39.924 
"data_offset": 0, 00:15:39.924 "data_size": 63488 00:15:39.924 }, 00:15:39.924 { 00:15:39.924 "name": null, 00:15:39.924 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:39.924 "is_configured": false, 00:15:39.924 "data_offset": 0, 00:15:39.924 "data_size": 63488 00:15:39.924 }, 00:15:39.924 { 00:15:39.924 "name": "BaseBdev3", 00:15:39.924 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:39.924 "is_configured": true, 00:15:39.924 "data_offset": 2048, 00:15:39.924 "data_size": 63488 00:15:39.924 }, 00:15:39.924 { 00:15:39.924 "name": "BaseBdev4", 00:15:39.924 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:39.924 "is_configured": true, 00:15:39.924 "data_offset": 2048, 00:15:39.924 "data_size": 63488 00:15:39.924 } 00:15:39.924 ] 00:15:39.924 }' 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.924 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.494 15:24:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 [2024-11-10 15:24:46.676788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.494 "name": "Existed_Raid", 00:15:40.494 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:40.494 "strip_size_kb": 64, 00:15:40.494 "state": "configuring", 00:15:40.494 "raid_level": "raid5f", 00:15:40.494 "superblock": true, 00:15:40.494 "num_base_bdevs": 4, 00:15:40.494 "num_base_bdevs_discovered": 3, 00:15:40.494 "num_base_bdevs_operational": 4, 00:15:40.494 "base_bdevs_list": [ 00:15:40.494 { 00:15:40.494 "name": null, 00:15:40.494 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:40.494 "is_configured": false, 00:15:40.494 "data_offset": 0, 00:15:40.494 "data_size": 63488 00:15:40.494 }, 00:15:40.494 { 00:15:40.494 "name": "BaseBdev2", 00:15:40.494 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:40.494 "is_configured": true, 00:15:40.494 "data_offset": 2048, 00:15:40.494 "data_size": 63488 00:15:40.494 }, 00:15:40.494 { 00:15:40.494 "name": "BaseBdev3", 00:15:40.494 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:40.494 "is_configured": true, 00:15:40.494 "data_offset": 2048, 00:15:40.494 "data_size": 63488 00:15:40.494 }, 00:15:40.494 { 00:15:40.494 "name": "BaseBdev4", 00:15:40.494 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:40.494 "is_configured": true, 00:15:40.494 "data_offset": 2048, 00:15:40.494 "data_size": 63488 00:15:40.494 } 00:15:40.494 ] 00:15:40.494 }' 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.494 15:24:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.064 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce8cef61-8e93-4333-8b43-6f6737ef5b41 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 [2024-11-10 15:24:47.256784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:41.065 [2024-11-10 15:24:47.257082] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:41.065 [2024-11-10 15:24:47.257109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.065 [2024-11-10 15:24:47.257408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000067d0 00:15:41.065 NewBaseBdev 00:15:41.065 [2024-11-10 15:24:47.257920] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:41.065 [2024-11-10 15:24:47.257933] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:41.065 [2024-11-10 15:24:47.258059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local bdev_name=NewBaseBdev 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local i 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.065 
15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 [ 00:15:41.065 { 00:15:41.065 "name": "NewBaseBdev", 00:15:41.065 "aliases": [ 00:15:41.065 "ce8cef61-8e93-4333-8b43-6f6737ef5b41" 00:15:41.065 ], 00:15:41.065 "product_name": "Malloc disk", 00:15:41.065 "block_size": 512, 00:15:41.065 "num_blocks": 65536, 00:15:41.065 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:41.065 "assigned_rate_limits": { 00:15:41.065 "rw_ios_per_sec": 0, 00:15:41.065 "rw_mbytes_per_sec": 0, 00:15:41.065 "r_mbytes_per_sec": 0, 00:15:41.065 "w_mbytes_per_sec": 0 00:15:41.065 }, 00:15:41.065 "claimed": true, 00:15:41.065 "claim_type": "exclusive_write", 00:15:41.065 "zoned": false, 00:15:41.065 "supported_io_types": { 00:15:41.065 "read": true, 00:15:41.065 "write": true, 00:15:41.065 "unmap": true, 00:15:41.065 "flush": true, 00:15:41.065 "reset": true, 00:15:41.065 "nvme_admin": false, 00:15:41.065 "nvme_io": false, 00:15:41.065 "nvme_io_md": false, 00:15:41.065 "write_zeroes": true, 00:15:41.065 "zcopy": true, 00:15:41.065 "get_zone_info": false, 00:15:41.065 "zone_management": false, 00:15:41.065 "zone_append": false, 00:15:41.065 "compare": false, 00:15:41.065 "compare_and_write": false, 00:15:41.065 "abort": true, 00:15:41.065 "seek_hole": false, 00:15:41.065 "seek_data": false, 00:15:41.065 "copy": true, 00:15:41.065 "nvme_iov_md": false 00:15:41.065 }, 00:15:41.065 "memory_domains": [ 00:15:41.065 { 00:15:41.065 "dma_device_id": "system", 00:15:41.065 "dma_device_type": 1 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.065 "dma_device_type": 2 00:15:41.065 } 00:15:41.065 ], 00:15:41.065 "driver_specific": {} 00:15:41.065 } 00:15:41.065 ] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@909 -- # return 0 00:15:41.065 15:24:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.065 "name": "Existed_Raid", 00:15:41.065 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:41.065 
"strip_size_kb": 64, 00:15:41.065 "state": "online", 00:15:41.065 "raid_level": "raid5f", 00:15:41.065 "superblock": true, 00:15:41.065 "num_base_bdevs": 4, 00:15:41.065 "num_base_bdevs_discovered": 4, 00:15:41.065 "num_base_bdevs_operational": 4, 00:15:41.065 "base_bdevs_list": [ 00:15:41.065 { 00:15:41.065 "name": "NewBaseBdev", 00:15:41.065 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "name": "BaseBdev2", 00:15:41.065 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "name": "BaseBdev3", 00:15:41.065 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 }, 00:15:41.065 { 00:15:41.065 "name": "BaseBdev4", 00:15:41.065 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:41.065 "is_configured": true, 00:15:41.065 "data_offset": 2048, 00:15:41.065 "data_size": 63488 00:15:41.065 } 00:15:41.065 ] 00:15:41.065 }' 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.065 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@184 -- # local name 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:41.635 [2024-11-10 15:24:47.749126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.635 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:41.635 "name": "Existed_Raid", 00:15:41.635 "aliases": [ 00:15:41.635 "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc" 00:15:41.635 ], 00:15:41.635 "product_name": "Raid Volume", 00:15:41.636 "block_size": 512, 00:15:41.636 "num_blocks": 190464, 00:15:41.636 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:41.636 "assigned_rate_limits": { 00:15:41.636 "rw_ios_per_sec": 0, 00:15:41.636 "rw_mbytes_per_sec": 0, 00:15:41.636 "r_mbytes_per_sec": 0, 00:15:41.636 "w_mbytes_per_sec": 0 00:15:41.636 }, 00:15:41.636 "claimed": false, 00:15:41.636 "zoned": false, 00:15:41.636 "supported_io_types": { 00:15:41.636 "read": true, 00:15:41.636 "write": true, 00:15:41.636 "unmap": false, 00:15:41.636 "flush": false, 00:15:41.636 "reset": true, 00:15:41.636 "nvme_admin": false, 00:15:41.636 "nvme_io": false, 00:15:41.636 "nvme_io_md": false, 00:15:41.636 "write_zeroes": true, 00:15:41.636 "zcopy": false, 00:15:41.636 "get_zone_info": false, 00:15:41.636 "zone_management": false, 00:15:41.636 "zone_append": false, 00:15:41.636 "compare": 
false, 00:15:41.636 "compare_and_write": false, 00:15:41.636 "abort": false, 00:15:41.636 "seek_hole": false, 00:15:41.636 "seek_data": false, 00:15:41.636 "copy": false, 00:15:41.636 "nvme_iov_md": false 00:15:41.636 }, 00:15:41.636 "driver_specific": { 00:15:41.636 "raid": { 00:15:41.636 "uuid": "d3fc0f2b-0dbc-412f-a005-9ac9d9dd6acc", 00:15:41.636 "strip_size_kb": 64, 00:15:41.636 "state": "online", 00:15:41.636 "raid_level": "raid5f", 00:15:41.636 "superblock": true, 00:15:41.636 "num_base_bdevs": 4, 00:15:41.636 "num_base_bdevs_discovered": 4, 00:15:41.636 "num_base_bdevs_operational": 4, 00:15:41.636 "base_bdevs_list": [ 00:15:41.636 { 00:15:41.636 "name": "NewBaseBdev", 00:15:41.636 "uuid": "ce8cef61-8e93-4333-8b43-6f6737ef5b41", 00:15:41.636 "is_configured": true, 00:15:41.636 "data_offset": 2048, 00:15:41.636 "data_size": 63488 00:15:41.636 }, 00:15:41.636 { 00:15:41.636 "name": "BaseBdev2", 00:15:41.636 "uuid": "fa055fb5-b6e2-4848-8a12-e4c462ed7545", 00:15:41.636 "is_configured": true, 00:15:41.636 "data_offset": 2048, 00:15:41.636 "data_size": 63488 00:15:41.636 }, 00:15:41.636 { 00:15:41.636 "name": "BaseBdev3", 00:15:41.636 "uuid": "597d0afd-bd19-4e97-95e1-819c6e49e96e", 00:15:41.636 "is_configured": true, 00:15:41.636 "data_offset": 2048, 00:15:41.636 "data_size": 63488 00:15:41.636 }, 00:15:41.636 { 00:15:41.636 "name": "BaseBdev4", 00:15:41.636 "uuid": "53335d0f-9df5-4978-b894-078300fafe25", 00:15:41.636 "is_configured": true, 00:15:41.636 "data_offset": 2048, 00:15:41.636 "data_size": 63488 00:15:41.636 } 00:15:41.636 ] 00:15:41.636 } 00:15:41.636 } 00:15:41.636 }' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:41.636 BaseBdev2 00:15:41.636 BaseBdev3 00:15:41.636 BaseBdev4' 00:15:41.636 15:24:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.636 15:24:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.897 15:24:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.897 15:24:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.897 [2024-11-10 15:24:48.085036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:41.897 [2024-11-10 15:24:48.085061] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:41.897 [2024-11-10 15:24:48.085128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.897 [2024-11-10 15:24:48.085437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.897 [2024-11-10 15:24:48.085463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 95297 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@952 -- # '[' -z 95297 ']' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # kill -0 95297 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # uname 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95297 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95297' 00:15:41.897 killing process with pid 95297 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@971 -- # kill 95297 00:15:41.897 [2024-11-10 15:24:48.135837] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.897 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@976 -- # wait 95297 00:15:41.897 [2024-11-10 15:24:48.213729] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:42.468 15:24:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:42.468 00:15:42.468 real 0m10.013s 00:15:42.468 user 0m16.752s 00:15:42.468 sys 0m2.325s 00:15:42.468 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:42.468 15:24:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 ************************************ 00:15:42.468 END TEST raid5f_state_function_test_sb 00:15:42.468 ************************************ 00:15:42.468 15:24:48 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:42.468 15:24:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:15:42.468 15:24:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:42.468 15:24:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 ************************************ 00:15:42.468 START TEST raid5f_superblock_test 00:15:42.468 
************************************ 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1127 -- # raid_superblock_test raid5f 4 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=95951 00:15:42.468 
15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 95951 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@833 -- # '[' -z 95951 ']' 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.468 15:24:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.468 [2024-11-10 15:24:48.726555] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:15:42.468 [2024-11-10 15:24:48.726696] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95951 ] 00:15:42.728 [2024-11-10 15:24:48.865331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:15:42.729 [2024-11-10 15:24:48.904051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.729 [2024-11-10 15:24:48.944700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.729 [2024-11-10 15:24:49.023621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.729 [2024-11-10 15:24:49.023662] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@866 -- # return 0 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 malloc1 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 [2024-11-10 15:24:49.552581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:43.300 [2024-11-10 15:24:49.552732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.300 [2024-11-10 15:24:49.552781] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:43.300 [2024-11-10 15:24:49.552835] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.300 [2024-11-10 15:24:49.555330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.300 [2024-11-10 15:24:49.555412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:43.300 pt1 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.300 15:24:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 malloc2 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 [2024-11-10 15:24:49.591570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:43.300 [2024-11-10 15:24:49.591622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.300 [2024-11-10 15:24:49.591642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:43.300 [2024-11-10 15:24:49.591650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.300 [2024-11-10 15:24:49.594104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.300 [2024-11-10 15:24:49.594137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:43.300 pt2 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 
00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 malloc3 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.300 [2024-11-10 15:24:49.626507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:43.300 [2024-11-10 15:24:49.626603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.300 [2024-11-10 15:24:49.626657] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000008a80 00:15:43.300 [2024-11-10 15:24:49.626684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.300 [2024-11-10 15:24:49.629139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.300 [2024-11-10 15:24:49.629209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:43.300 pt3 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.300 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.301 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.561 malloc4 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd 
bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.561 [2024-11-10 15:24:49.673711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:43.561 [2024-11-10 15:24:49.673806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:43.561 [2024-11-10 15:24:49.673860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:43.561 [2024-11-10 15:24:49.673886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:43.561 [2024-11-10 15:24:49.676289] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:43.561 [2024-11-10 15:24:49.676359] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:43.561 pt4 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.561 [2024-11-10 15:24:49.685773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:43.561 [2024-11-10 15:24:49.687952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:43.561 [2024-11-10 15:24:49.688034] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:43.561 [2024-11-10 15:24:49.688092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:43.561 [2024-11-10 15:24:49.688271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:43.561 [2024-11-10 15:24:49.688283] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:43.561 [2024-11-10 15:24:49.688545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:43.561 [2024-11-10 15:24:49.689072] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:43.561 [2024-11-10 15:24:49.689088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:43.561 [2024-11-10 15:24:49.689213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.561 "name": "raid_bdev1", 00:15:43.561 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:43.561 "strip_size_kb": 64, 00:15:43.561 "state": "online", 00:15:43.561 "raid_level": "raid5f", 00:15:43.561 "superblock": true, 00:15:43.561 "num_base_bdevs": 4, 00:15:43.561 "num_base_bdevs_discovered": 4, 00:15:43.561 "num_base_bdevs_operational": 4, 00:15:43.561 "base_bdevs_list": [ 00:15:43.561 { 00:15:43.561 "name": "pt1", 00:15:43.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:43.561 "is_configured": true, 00:15:43.561 "data_offset": 2048, 00:15:43.561 "data_size": 63488 00:15:43.561 }, 00:15:43.561 { 00:15:43.561 "name": "pt2", 00:15:43.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:43.561 "is_configured": true, 00:15:43.561 "data_offset": 2048, 00:15:43.561 "data_size": 63488 00:15:43.561 }, 00:15:43.561 { 00:15:43.561 "name": "pt3", 00:15:43.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:43.561 "is_configured": true, 00:15:43.561 "data_offset": 2048, 00:15:43.561 "data_size": 63488 00:15:43.561 }, 00:15:43.561 { 00:15:43.561 "name": "pt4", 00:15:43.561 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:43.561 "is_configured": true, 00:15:43.561 "data_offset": 2048, 00:15:43.561 "data_size": 63488 00:15:43.561 } 00:15:43.561 ] 00:15:43.561 }' 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.561 15:24:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.821 [2024-11-10 15:24:50.148416] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:43.821 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.082 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.082 "name": "raid_bdev1", 00:15:44.082 "aliases": [ 00:15:44.082 "82efdf7b-9280-4daa-96bc-9c33b69aa940" 00:15:44.082 ], 00:15:44.082 "product_name": "Raid Volume", 00:15:44.082 
"block_size": 512, 00:15:44.082 "num_blocks": 190464, 00:15:44.082 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:44.082 "assigned_rate_limits": { 00:15:44.082 "rw_ios_per_sec": 0, 00:15:44.082 "rw_mbytes_per_sec": 0, 00:15:44.082 "r_mbytes_per_sec": 0, 00:15:44.082 "w_mbytes_per_sec": 0 00:15:44.082 }, 00:15:44.082 "claimed": false, 00:15:44.082 "zoned": false, 00:15:44.082 "supported_io_types": { 00:15:44.082 "read": true, 00:15:44.082 "write": true, 00:15:44.082 "unmap": false, 00:15:44.082 "flush": false, 00:15:44.082 "reset": true, 00:15:44.082 "nvme_admin": false, 00:15:44.082 "nvme_io": false, 00:15:44.082 "nvme_io_md": false, 00:15:44.082 "write_zeroes": true, 00:15:44.082 "zcopy": false, 00:15:44.082 "get_zone_info": false, 00:15:44.082 "zone_management": false, 00:15:44.082 "zone_append": false, 00:15:44.082 "compare": false, 00:15:44.082 "compare_and_write": false, 00:15:44.082 "abort": false, 00:15:44.082 "seek_hole": false, 00:15:44.082 "seek_data": false, 00:15:44.082 "copy": false, 00:15:44.082 "nvme_iov_md": false 00:15:44.082 }, 00:15:44.082 "driver_specific": { 00:15:44.082 "raid": { 00:15:44.082 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:44.082 "strip_size_kb": 64, 00:15:44.082 "state": "online", 00:15:44.082 "raid_level": "raid5f", 00:15:44.082 "superblock": true, 00:15:44.082 "num_base_bdevs": 4, 00:15:44.082 "num_base_bdevs_discovered": 4, 00:15:44.082 "num_base_bdevs_operational": 4, 00:15:44.082 "base_bdevs_list": [ 00:15:44.082 { 00:15:44.082 "name": "pt1", 00:15:44.082 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.082 "is_configured": true, 00:15:44.082 "data_offset": 2048, 00:15:44.082 "data_size": 63488 00:15:44.082 }, 00:15:44.082 { 00:15:44.082 "name": "pt2", 00:15:44.082 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.082 "is_configured": true, 00:15:44.082 "data_offset": 2048, 00:15:44.082 "data_size": 63488 00:15:44.082 }, 00:15:44.082 { 00:15:44.082 "name": "pt3", 00:15:44.082 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:15:44.082 "is_configured": true, 00:15:44.082 "data_offset": 2048, 00:15:44.082 "data_size": 63488 00:15:44.082 }, 00:15:44.082 { 00:15:44.082 "name": "pt4", 00:15:44.082 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.082 "is_configured": true, 00:15:44.082 "data_offset": 2048, 00:15:44.082 "data_size": 63488 00:15:44.082 } 00:15:44.082 ] 00:15:44.082 } 00:15:44.082 } 00:15:44.082 }' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:44.083 pt2 00:15:44.083 pt3 00:15:44.083 pt4' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.083 15:24:50 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.083 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:44.343 [2024-11-10 15:24:50.476469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=82efdf7b-9280-4daa-96bc-9c33b69aa940 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 82efdf7b-9280-4daa-96bc-9c33b69aa940 ']' 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:44.343 [2024-11-10 15:24:50.516307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.343 [2024-11-10 15:24:50.516368] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:44.343 [2024-11-10 15:24:50.516451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:44.343 [2024-11-10 15:24:50.516552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:44.343 [2024-11-10 15:24:50.516564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:44.343 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:44.344 15:24:50 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.344 [2024-11-10 15:24:50.688409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:44.344 [2024-11-10 15:24:50.690512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:15:44.344 [2024-11-10 15:24:50.690609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:44.344 [2024-11-10 15:24:50.690656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:44.344 [2024-11-10 15:24:50.690747] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:44.344 [2024-11-10 15:24:50.690856] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:44.344 [2024-11-10 15:24:50.690907] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:44.344 [2024-11-10 15:24:50.690968] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:44.344 [2024-11-10 15:24:50.691023] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:44.344 [2024-11-10 15:24:50.691060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:15:44.344 request: 00:15:44.344 { 00:15:44.344 "name": "raid_bdev1", 00:15:44.344 "raid_level": "raid5f", 00:15:44.344 "base_bdevs": [ 00:15:44.344 "malloc1", 00:15:44.344 "malloc2", 00:15:44.344 "malloc3", 00:15:44.344 "malloc4" 00:15:44.344 ], 00:15:44.344 "strip_size_kb": 64, 00:15:44.344 "superblock": false, 00:15:44.344 "method": "bdev_raid_create", 00:15:44.344 "req_id": 1 00:15:44.344 } 00:15:44.344 Got JSON-RPC error response 00:15:44.344 response: 00:15:44.344 { 00:15:44.344 "code": -17, 00:15:44.344 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:44.344 } 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # 
es=1 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:44.344 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 [2024-11-10 15:24:50.752381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:44.605 [2024-11-10 15:24:50.752432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.605 [2024-11-10 15:24:50.752465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:44.605 [2024-11-10 15:24:50.752476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.605 [2024-11-10 15:24:50.754909] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:15:44.605 [2024-11-10 15:24:50.754945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:44.605 [2024-11-10 15:24:50.755017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:44.605 [2024-11-10 15:24:50.755069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:44.605 pt1 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.605 "name": "raid_bdev1", 00:15:44.605 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:44.605 "strip_size_kb": 64, 00:15:44.605 "state": "configuring", 00:15:44.605 "raid_level": "raid5f", 00:15:44.605 "superblock": true, 00:15:44.605 "num_base_bdevs": 4, 00:15:44.605 "num_base_bdevs_discovered": 1, 00:15:44.605 "num_base_bdevs_operational": 4, 00:15:44.605 "base_bdevs_list": [ 00:15:44.605 { 00:15:44.605 "name": "pt1", 00:15:44.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:44.605 "is_configured": true, 00:15:44.605 "data_offset": 2048, 00:15:44.605 "data_size": 63488 00:15:44.605 }, 00:15:44.605 { 00:15:44.605 "name": null, 00:15:44.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:44.605 "is_configured": false, 00:15:44.605 "data_offset": 2048, 00:15:44.605 "data_size": 63488 00:15:44.605 }, 00:15:44.605 { 00:15:44.605 "name": null, 00:15:44.605 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:44.605 "is_configured": false, 00:15:44.605 "data_offset": 2048, 00:15:44.605 "data_size": 63488 00:15:44.605 }, 00:15:44.605 { 00:15:44.605 "name": null, 00:15:44.605 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:44.605 "is_configured": false, 00:15:44.605 "data_offset": 2048, 00:15:44.605 "data_size": 63488 00:15:44.605 } 00:15:44.605 ] 00:15:44.605 }' 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.605 15:24:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.866 [2024-11-10 15:24:51.192489] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:44.866 [2024-11-10 15:24:51.192599] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.866 [2024-11-10 15:24:51.192632] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:44.866 [2024-11-10 15:24:51.192660] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.866 [2024-11-10 15:24:51.193048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.866 [2024-11-10 15:24:51.193107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:44.866 [2024-11-10 15:24:51.193188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:44.866 [2024-11-10 15:24:51.193238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:44.866 pt2 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.866 [2024-11-10 15:24:51.204495] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.866 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.126 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.126 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.126 "name": "raid_bdev1", 00:15:45.126 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:45.126 "strip_size_kb": 64, 00:15:45.126 "state": "configuring", 00:15:45.126 "raid_level": "raid5f", 00:15:45.126 "superblock": true, 00:15:45.126 
"num_base_bdevs": 4, 00:15:45.126 "num_base_bdevs_discovered": 1, 00:15:45.126 "num_base_bdevs_operational": 4, 00:15:45.126 "base_bdevs_list": [ 00:15:45.126 { 00:15:45.126 "name": "pt1", 00:15:45.126 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.126 "is_configured": true, 00:15:45.126 "data_offset": 2048, 00:15:45.126 "data_size": 63488 00:15:45.126 }, 00:15:45.126 { 00:15:45.126 "name": null, 00:15:45.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.126 "is_configured": false, 00:15:45.126 "data_offset": 0, 00:15:45.126 "data_size": 63488 00:15:45.126 }, 00:15:45.126 { 00:15:45.126 "name": null, 00:15:45.126 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.126 "is_configured": false, 00:15:45.126 "data_offset": 2048, 00:15:45.126 "data_size": 63488 00:15:45.126 }, 00:15:45.126 { 00:15:45.126 "name": null, 00:15:45.126 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.126 "is_configured": false, 00:15:45.126 "data_offset": 2048, 00:15:45.126 "data_size": 63488 00:15:45.126 } 00:15:45.126 ] 00:15:45.126 }' 00:15:45.126 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.126 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.387 [2024-11-10 15:24:51.600621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:45.387 [2024-11-10 
15:24:51.600670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.387 [2024-11-10 15:24:51.600687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:45.387 [2024-11-10 15:24:51.600695] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.387 [2024-11-10 15:24:51.601074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.387 [2024-11-10 15:24:51.601101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:45.387 [2024-11-10 15:24:51.601161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:45.387 [2024-11-10 15:24:51.601179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:45.387 pt2 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.387 [2024-11-10 15:24:51.612609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:45.387 [2024-11-10 15:24:51.612655] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.387 [2024-11-10 15:24:51.612671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:45.387 [2024-11-10 15:24:51.612679] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:15:45.387 [2024-11-10 15:24:51.613001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.387 [2024-11-10 15:24:51.613039] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:45.387 [2024-11-10 15:24:51.613093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:45.387 [2024-11-10 15:24:51.613110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:45.387 pt3 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.387 [2024-11-10 15:24:51.624617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:45.387 [2024-11-10 15:24:51.624662] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:45.387 [2024-11-10 15:24:51.624679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:45.387 [2024-11-10 15:24:51.624686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:45.387 [2024-11-10 15:24:51.625004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:45.387 [2024-11-10 15:24:51.625036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:45.387 [2024-11-10 15:24:51.625088] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:45.387 [2024-11-10 15:24:51.625104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:45.387 [2024-11-10 15:24:51.625230] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:45.387 [2024-11-10 15:24:51.625238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:45.387 [2024-11-10 15:24:51.625485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:45.387 [2024-11-10 15:24:51.625983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:45.387 [2024-11-10 15:24:51.626005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:45.387 [2024-11-10 15:24:51.626118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.387 pt4 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.387 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.388 "name": "raid_bdev1", 00:15:45.388 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:45.388 "strip_size_kb": 64, 00:15:45.388 "state": "online", 00:15:45.388 "raid_level": "raid5f", 00:15:45.388 "superblock": true, 00:15:45.388 "num_base_bdevs": 4, 00:15:45.388 "num_base_bdevs_discovered": 4, 00:15:45.388 "num_base_bdevs_operational": 4, 00:15:45.388 "base_bdevs_list": [ 00:15:45.388 { 00:15:45.388 "name": "pt1", 00:15:45.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.388 "is_configured": true, 00:15:45.388 "data_offset": 2048, 00:15:45.388 "data_size": 63488 00:15:45.388 }, 00:15:45.388 { 00:15:45.388 "name": "pt2", 00:15:45.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.388 "is_configured": true, 00:15:45.388 "data_offset": 2048, 00:15:45.388 "data_size": 63488 00:15:45.388 }, 00:15:45.388 { 00:15:45.388 "name": "pt3", 
00:15:45.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.388 "is_configured": true, 00:15:45.388 "data_offset": 2048, 00:15:45.388 "data_size": 63488 00:15:45.388 }, 00:15:45.388 { 00:15:45.388 "name": "pt4", 00:15:45.388 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.388 "is_configured": true, 00:15:45.388 "data_offset": 2048, 00:15:45.388 "data_size": 63488 00:15:45.388 } 00:15:45.388 ] 00:15:45.388 }' 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.388 15:24:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.958 [2024-11-10 15:24:52.105112] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.958 15:24:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.958 "name": "raid_bdev1", 00:15:45.958 "aliases": [ 00:15:45.958 "82efdf7b-9280-4daa-96bc-9c33b69aa940" 00:15:45.958 ], 00:15:45.958 "product_name": "Raid Volume", 00:15:45.958 "block_size": 512, 00:15:45.958 "num_blocks": 190464, 00:15:45.958 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:45.958 "assigned_rate_limits": { 00:15:45.958 "rw_ios_per_sec": 0, 00:15:45.958 "rw_mbytes_per_sec": 0, 00:15:45.958 "r_mbytes_per_sec": 0, 00:15:45.958 "w_mbytes_per_sec": 0 00:15:45.958 }, 00:15:45.958 "claimed": false, 00:15:45.958 "zoned": false, 00:15:45.958 "supported_io_types": { 00:15:45.958 "read": true, 00:15:45.958 "write": true, 00:15:45.958 "unmap": false, 00:15:45.958 "flush": false, 00:15:45.958 "reset": true, 00:15:45.958 "nvme_admin": false, 00:15:45.958 "nvme_io": false, 00:15:45.958 "nvme_io_md": false, 00:15:45.958 "write_zeroes": true, 00:15:45.958 "zcopy": false, 00:15:45.958 "get_zone_info": false, 00:15:45.958 "zone_management": false, 00:15:45.958 "zone_append": false, 00:15:45.958 "compare": false, 00:15:45.958 "compare_and_write": false, 00:15:45.958 "abort": false, 00:15:45.958 "seek_hole": false, 00:15:45.958 "seek_data": false, 00:15:45.958 "copy": false, 00:15:45.958 "nvme_iov_md": false 00:15:45.958 }, 00:15:45.958 "driver_specific": { 00:15:45.958 "raid": { 00:15:45.958 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:45.958 "strip_size_kb": 64, 00:15:45.958 "state": "online", 00:15:45.958 "raid_level": "raid5f", 00:15:45.958 "superblock": true, 00:15:45.958 "num_base_bdevs": 4, 00:15:45.958 "num_base_bdevs_discovered": 4, 00:15:45.958 "num_base_bdevs_operational": 4, 00:15:45.958 "base_bdevs_list": [ 00:15:45.958 { 00:15:45.958 "name": "pt1", 00:15:45.958 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:45.958 "is_configured": true, 00:15:45.958 "data_offset": 2048, 00:15:45.958 "data_size": 63488 00:15:45.958 }, 00:15:45.958 { 00:15:45.958 
"name": "pt2", 00:15:45.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:45.958 "is_configured": true, 00:15:45.958 "data_offset": 2048, 00:15:45.958 "data_size": 63488 00:15:45.958 }, 00:15:45.958 { 00:15:45.958 "name": "pt3", 00:15:45.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:45.958 "is_configured": true, 00:15:45.958 "data_offset": 2048, 00:15:45.958 "data_size": 63488 00:15:45.958 }, 00:15:45.958 { 00:15:45.958 "name": "pt4", 00:15:45.958 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:45.958 "is_configured": true, 00:15:45.958 "data_offset": 2048, 00:15:45.958 "data_size": 63488 00:15:45.958 } 00:15:45.958 ] 00:15:45.958 } 00:15:45.958 } 00:15:45.958 }' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:45.958 pt2 00:15:45.958 pt3 00:15:45.958 pt4' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.958 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.959 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.219 15:24:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:46.219 [2024-11-10 15:24:52.425180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 82efdf7b-9280-4daa-96bc-9c33b69aa940 '!=' 82efdf7b-9280-4daa-96bc-9c33b69aa940 ']' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:46.219 15:24:52 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.219 [2024-11-10 15:24:52.473087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.219 "name": "raid_bdev1", 00:15:46.219 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:46.219 "strip_size_kb": 64, 00:15:46.219 "state": "online", 00:15:46.219 "raid_level": "raid5f", 00:15:46.219 "superblock": true, 00:15:46.219 "num_base_bdevs": 4, 00:15:46.219 "num_base_bdevs_discovered": 3, 00:15:46.219 "num_base_bdevs_operational": 3, 00:15:46.219 "base_bdevs_list": [ 00:15:46.219 { 00:15:46.219 "name": null, 00:15:46.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.219 "is_configured": false, 00:15:46.219 "data_offset": 0, 00:15:46.219 "data_size": 63488 00:15:46.219 }, 00:15:46.219 { 00:15:46.219 "name": "pt2", 00:15:46.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.219 "is_configured": true, 00:15:46.219 "data_offset": 2048, 00:15:46.219 "data_size": 63488 00:15:46.219 }, 00:15:46.219 { 00:15:46.219 "name": "pt3", 00:15:46.219 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.219 "is_configured": true, 00:15:46.219 "data_offset": 2048, 00:15:46.219 "data_size": 63488 00:15:46.219 }, 00:15:46.219 { 00:15:46.219 "name": "pt4", 00:15:46.219 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.219 "is_configured": true, 00:15:46.219 "data_offset": 2048, 00:15:46.219 "data_size": 63488 00:15:46.219 } 00:15:46.219 ] 00:15:46.219 }' 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.219 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
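The `verify_raid_bdev_state` helper in the log above selects the raid bdev's record out of `bdev_raid_get_bdevs all` with a `jq` filter and then compares individual fields. A minimal Python equivalent of that filter, using a sample trimmed to the fields the test inspects (values are copied from the logged "online" record, nothing is re-queried or invented):

```python
import json

# Sample bdev_raid_get_bdevs output, shaped like the logged record
# (trimmed to the fields verify_raid_bdev_state checks).
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940",
    "state": "online",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

print(info["state"])        # compared against expected_state ("online")
print(info["raid_level"])   # compared against raid_level ("raid5f")
```

This mirrors only the selection step; the shell helper goes on to diff each field against the expected values it was called with.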
00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.790 [2024-11-10 15:24:52.941163] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.790 [2024-11-10 15:24:52.941237] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.790 [2024-11-10 15:24:52.941337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:46.790 [2024-11-10 15:24:52.941427] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.790 [2024-11-10 15:24:52.941472] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.790 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.791 15:24:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:46.791 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.791 15:24:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:46.791 15:24:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.791 [2024-11-10 15:24:53.021173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.791 [2024-11-10 15:24:53.021218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.791 [2024-11-10 15:24:53.021235] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:46.791 [2024-11-10 15:24:53.021243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.791 [2024-11-10 15:24:53.023686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.791 [2024-11-10 15:24:53.023723] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.791 [2024-11-10 15:24:53.023789] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:46.791 [2024-11-10 15:24:53.023827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.791 pt2 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 
-- # local expected_state=configuring 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.791 "name": "raid_bdev1", 00:15:46.791 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:46.791 "strip_size_kb": 64, 00:15:46.791 "state": "configuring", 00:15:46.791 "raid_level": "raid5f", 00:15:46.791 "superblock": true, 00:15:46.791 "num_base_bdevs": 4, 00:15:46.791 "num_base_bdevs_discovered": 1, 00:15:46.791 "num_base_bdevs_operational": 3, 00:15:46.791 "base_bdevs_list": [ 00:15:46.791 { 00:15:46.791 "name": null, 00:15:46.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.791 "is_configured": false, 
00:15:46.791 "data_offset": 2048, 00:15:46.791 "data_size": 63488 00:15:46.791 }, 00:15:46.791 { 00:15:46.791 "name": "pt2", 00:15:46.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.791 "is_configured": true, 00:15:46.791 "data_offset": 2048, 00:15:46.791 "data_size": 63488 00:15:46.791 }, 00:15:46.791 { 00:15:46.791 "name": null, 00:15:46.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:46.791 "is_configured": false, 00:15:46.791 "data_offset": 2048, 00:15:46.791 "data_size": 63488 00:15:46.791 }, 00:15:46.791 { 00:15:46.791 "name": null, 00:15:46.791 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:46.791 "is_configured": false, 00:15:46.791 "data_offset": 2048, 00:15:46.791 "data_size": 63488 00:15:46.791 } 00:15:46.791 ] 00:15:46.791 }' 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.791 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.361 [2024-11-10 15:24:53.473329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:47.361 [2024-11-10 15:24:53.473416] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.361 [2024-11-10 15:24:53.473472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:47.361 [2024-11-10 15:24:53.473498] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.361 [2024-11-10 15:24:53.473891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.361 [2024-11-10 15:24:53.473947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:47.361 [2024-11-10 15:24:53.474055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:47.361 [2024-11-10 15:24:53.474114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:47.361 pt3 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.361 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.361 "name": "raid_bdev1", 00:15:47.361 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:47.361 "strip_size_kb": 64, 00:15:47.361 "state": "configuring", 00:15:47.361 "raid_level": "raid5f", 00:15:47.361 "superblock": true, 00:15:47.361 "num_base_bdevs": 4, 00:15:47.361 "num_base_bdevs_discovered": 2, 00:15:47.361 "num_base_bdevs_operational": 3, 00:15:47.361 "base_bdevs_list": [ 00:15:47.361 { 00:15:47.361 "name": null, 00:15:47.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.361 "is_configured": false, 00:15:47.361 "data_offset": 2048, 00:15:47.361 "data_size": 63488 00:15:47.361 }, 00:15:47.361 { 00:15:47.361 "name": "pt2", 00:15:47.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.361 "is_configured": true, 00:15:47.361 "data_offset": 2048, 00:15:47.361 "data_size": 63488 00:15:47.361 }, 00:15:47.361 { 00:15:47.361 "name": "pt3", 00:15:47.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.361 "is_configured": true, 00:15:47.361 "data_offset": 2048, 00:15:47.362 "data_size": 63488 00:15:47.362 }, 00:15:47.362 { 00:15:47.362 "name": null, 00:15:47.362 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.362 "is_configured": false, 00:15:47.362 "data_offset": 2048, 00:15:47.362 "data_size": 63488 00:15:47.362 } 00:15:47.362 ] 00:15:47.362 }' 00:15:47.362 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.362 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
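In the `raid_bdev_info` JSON above, `num_base_bdevs_discovered` tracks how many slots in `base_bdevs_list` are currently configured: after `pt2` and `pt3` are re-registered the count is 2, with the null-named slots still missing. A small sketch of that relationship, using the slot values from the logged "configuring" record (this illustrates the reported counters only, not SPDK's internal accounting):

```python
# base_bdevs_list as logged for the "configuring" state: pt2 and pt3 have
# been re-created, the first and fourth passthru bdevs are still deleted.
base_bdevs_list = [
    {"name": None,  "is_configured": False},
    {"name": "pt2", "is_configured": True},
    {"name": "pt3", "is_configured": True},
    {"name": None,  "is_configured": False},
]

# Count the configured slots, as reported by num_base_bdevs_discovered.
discovered = sum(1 for b in base_bdevs_list if b["is_configured"])
print(discovered)  # matches "num_base_bdevs_discovered": 2 in the log
```

With `num_base_bdevs_operational` at 3, the raid5f array stays in "configuring" until a third slot is filled, which is why the test re-adds `pt4` next.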
00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.622 [2024-11-10 15:24:53.965446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:47.622 [2024-11-10 15:24:53.965494] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.622 [2024-11-10 15:24:53.965513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:47.622 [2024-11-10 15:24:53.965521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.622 [2024-11-10 15:24:53.965889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.622 [2024-11-10 15:24:53.965904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:47.622 [2024-11-10 15:24:53.965960] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:47.622 [2024-11-10 15:24:53.965979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:47.622 [2024-11-10 15:24:53.966091] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:47.622 [2024-11-10 15:24:53.966100] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:47.622 [2024-11-10 15:24:53.966370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006490 00:15:47.622 [2024-11-10 15:24:53.966958] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:47.622 [2024-11-10 15:24:53.966980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:47.622 [2024-11-10 15:24:53.967227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.622 pt4 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.622 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.882 15:24:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.882 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.882 "name": "raid_bdev1", 00:15:47.882 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:47.882 "strip_size_kb": 64, 00:15:47.882 "state": "online", 00:15:47.882 "raid_level": "raid5f", 00:15:47.882 "superblock": true, 00:15:47.882 "num_base_bdevs": 4, 00:15:47.882 "num_base_bdevs_discovered": 3, 00:15:47.882 "num_base_bdevs_operational": 3, 00:15:47.882 "base_bdevs_list": [ 00:15:47.882 { 00:15:47.882 "name": null, 00:15:47.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.882 "is_configured": false, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 }, 00:15:47.882 { 00:15:47.882 "name": "pt2", 00:15:47.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.882 "is_configured": true, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 }, 00:15:47.882 { 00:15:47.882 "name": "pt3", 00:15:47.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:47.882 "is_configured": true, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 }, 00:15:47.882 { 00:15:47.882 "name": "pt4", 00:15:47.882 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:47.882 "is_configured": true, 00:15:47.882 "data_offset": 2048, 00:15:47.882 "data_size": 63488 00:15:47.882 } 00:15:47.882 ] 00:15:47.882 }' 00:15:47.882 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.882 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.142 
15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 [2024-11-10 15:24:54.438024] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.142 [2024-11-10 15:24:54.438108] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.142 [2024-11-10 15:24:54.438184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.142 [2024-11-10 15:24:54.438284] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.142 [2024-11-10 15:24:54.438336] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete 
pt4 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.142 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 [2024-11-10 15:24:54.510076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.403 [2024-11-10 15:24:54.510193] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.403 [2024-11-10 15:24:54.510214] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:48.403 [2024-11-10 15:24:54.510226] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.403 [2024-11-10 15:24:54.512736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.403 [2024-11-10 15:24:54.512777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.403 [2024-11-10 15:24:54.512837] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:48.403 [2024-11-10 15:24:54.512882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.403 [2024-11-10 15:24:54.512999] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:48.403 [2024-11-10 15:24:54.513011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.403 [2024-11-10 
15:24:54.513044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:15:48.403 [2024-11-10 15:24:54.513088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.403 [2024-11-10 15:24:54.513176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.403 pt1 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.403 15:24:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.403 "name": "raid_bdev1", 00:15:48.403 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:48.403 "strip_size_kb": 64, 00:15:48.403 "state": "configuring", 00:15:48.403 "raid_level": "raid5f", 00:15:48.403 "superblock": true, 00:15:48.403 "num_base_bdevs": 4, 00:15:48.403 "num_base_bdevs_discovered": 2, 00:15:48.403 "num_base_bdevs_operational": 3, 00:15:48.403 "base_bdevs_list": [ 00:15:48.403 { 00:15:48.403 "name": null, 00:15:48.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.403 "is_configured": false, 00:15:48.403 "data_offset": 2048, 00:15:48.403 "data_size": 63488 00:15:48.403 }, 00:15:48.403 { 00:15:48.403 "name": "pt2", 00:15:48.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.403 "is_configured": true, 00:15:48.403 "data_offset": 2048, 00:15:48.403 "data_size": 63488 00:15:48.403 }, 00:15:48.403 { 00:15:48.403 "name": "pt3", 00:15:48.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.403 "is_configured": true, 00:15:48.403 "data_offset": 2048, 00:15:48.403 "data_size": 63488 00:15:48.403 }, 00:15:48.403 { 00:15:48.403 "name": null, 00:15:48.403 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.403 "is_configured": false, 00:15:48.403 "data_offset": 2048, 00:15:48.403 "data_size": 63488 00:15:48.403 } 00:15:48.403 ] 00:15:48.403 }' 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.403 15:24:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.663 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd 
bdev_raid_get_bdevs configuring 00:15:48.663 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:48.663 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.663 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.663 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.924 [2024-11-10 15:24:55.054198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:48.924 [2024-11-10 15:24:55.054305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.924 [2024-11-10 15:24:55.054343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:48.924 [2024-11-10 15:24:55.054383] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.924 [2024-11-10 15:24:55.054775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.924 [2024-11-10 15:24:55.054838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:48.924 [2024-11-10 15:24:55.054936] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:48.924 [2024-11-10 15:24:55.054985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:48.924 [2024-11-10 15:24:55.055116] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:15:48.924 [2024-11-10 15:24:55.055154] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.924 [2024-11-10 15:24:55.055451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:48.924 [2024-11-10 15:24:55.056059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:15:48.924 [2024-11-10 15:24:55.056114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:15:48.924 [2024-11-10 15:24:55.056336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.924 pt4 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.924 
15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.924 "name": "raid_bdev1", 00:15:48.924 "uuid": "82efdf7b-9280-4daa-96bc-9c33b69aa940", 00:15:48.924 "strip_size_kb": 64, 00:15:48.924 "state": "online", 00:15:48.924 "raid_level": "raid5f", 00:15:48.924 "superblock": true, 00:15:48.924 "num_base_bdevs": 4, 00:15:48.924 "num_base_bdevs_discovered": 3, 00:15:48.924 "num_base_bdevs_operational": 3, 00:15:48.924 "base_bdevs_list": [ 00:15:48.924 { 00:15:48.924 "name": null, 00:15:48.924 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.924 "is_configured": false, 00:15:48.924 "data_offset": 2048, 00:15:48.924 "data_size": 63488 00:15:48.924 }, 00:15:48.924 { 00:15:48.924 "name": "pt2", 00:15:48.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.924 "is_configured": true, 00:15:48.924 "data_offset": 2048, 00:15:48.924 "data_size": 63488 00:15:48.924 }, 00:15:48.924 { 00:15:48.924 "name": "pt3", 00:15:48.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.924 "is_configured": true, 00:15:48.924 "data_offset": 2048, 00:15:48.924 "data_size": 63488 00:15:48.924 }, 00:15:48.924 { 00:15:48.924 "name": "pt4", 00:15:48.924 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.924 "is_configured": true, 00:15:48.924 "data_offset": 2048, 00:15:48.924 "data_size": 63488 00:15:48.924 } 00:15:48.924 ] 00:15:48.924 }' 00:15:48.924 15:24:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.924 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:49.185 [2024-11-10 15:24:55.523117] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.185 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 82efdf7b-9280-4daa-96bc-9c33b69aa940 '!=' 82efdf7b-9280-4daa-96bc-9c33b69aa940 ']' 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 95951 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@952 -- # '[' -z 95951 ']' 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # kill -0 
95951 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # uname 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 95951 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:49.445 killing process with pid 95951 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 95951' 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@971 -- # kill 95951 00:15:49.445 [2024-11-10 15:24:55.608492] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:49.445 [2024-11-10 15:24:55.608569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.445 [2024-11-10 15:24:55.608643] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.445 [2024-11-10 15:24:55.608656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:15:49.445 15:24:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@976 -- # wait 95951 00:15:49.445 [2024-11-10 15:24:55.689393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.705 15:24:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:49.705 00:15:49.705 real 0m7.398s 00:15:49.705 user 0m12.231s 00:15:49.705 sys 0m1.678s 00:15:49.705 15:24:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.705 ************************************ 00:15:49.705 END TEST raid5f_superblock_test 00:15:49.705 
************************************ 00:15:49.705 15:24:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.966 15:24:56 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:49.966 15:24:56 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:49.966 15:24:56 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:15:49.966 15:24:56 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.966 15:24:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.966 ************************************ 00:15:49.966 START TEST raid5f_rebuild_test 00:15:49.966 ************************************ 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 false false true 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:49.966 15:24:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=96425 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 96425 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@833 -- # '[' -z 96425 ']' 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:49.966 15:24:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.966 [2024-11-10 15:24:56.220850] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:15:49.966 [2024-11-10 15:24:56.221129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:49.966 Zero copy mechanism will not be used. 00:15:49.966 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96425 ] 00:15:50.231 [2024-11-10 15:24:56.360697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:15:50.231 [2024-11-10 15:24:56.398576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.231 [2024-11-10 15:24:56.439496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.231 [2024-11-10 15:24:56.519806] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.231 [2024-11-10 15:24:56.519917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@866 -- # return 0 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.863 BaseBdev1_malloc 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.863 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.863 [2024-11-10 15:24:57.080862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:50.863 [2024-11-10 15:24:57.080942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.863 [2024-11-10 15:24:57.080971] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:15:50.863 [2024-11-10 15:24:57.080986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.864 [2024-11-10 15:24:57.083564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.864 [2024-11-10 15:24:57.083643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:50.864 BaseBdev1 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 BaseBdev2_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 [2024-11-10 15:24:57.115514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:50.864 [2024-11-10 15:24:57.115632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.864 [2024-11-10 15:24:57.115670] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:50.864 [2024-11-10 15:24:57.115704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.864 [2024-11-10 15:24:57.118110] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.864 [2024-11-10 15:24:57.118186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:50.864 BaseBdev2 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 BaseBdev3_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 [2024-11-10 15:24:57.149926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:50.864 [2024-11-10 15:24:57.150057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.864 [2024-11-10 15:24:57.150098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:50.864 [2024-11-10 15:24:57.150150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.864 [2024-11-10 15:24:57.152508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.864 [2024-11-10 15:24:57.152610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:50.864 
BaseBdev3 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 BaseBdev4_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.864 [2024-11-10 15:24:57.200340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:50.864 [2024-11-10 15:24:57.200528] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.864 [2024-11-10 15:24:57.200592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:50.864 [2024-11-10 15:24:57.200677] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.864 [2024-11-10 15:24:57.204617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.864 [2024-11-10 15:24:57.204740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:50.864 BaseBdev4 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd 
bdev_malloc_create 32 512 -b spare_malloc 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.864 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 spare_malloc 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 spare_delay 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 [2024-11-10 15:24:57.249311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.124 [2024-11-10 15:24:57.249433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.124 [2024-11-10 15:24:57.249472] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:51.124 [2024-11-10 15:24:57.249504] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.124 [2024-11-10 15:24:57.251902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.124 [2024-11-10 15:24:57.251979] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.124 spare 00:15:51.124 15:24:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 [2024-11-10 15:24:57.261395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.124 [2024-11-10 15:24:57.263568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.124 [2024-11-10 15:24:57.263680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.124 [2024-11-10 15:24:57.263741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:51.124 [2024-11-10 15:24:57.263844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:15:51.124 [2024-11-10 15:24:57.263866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:51.124 [2024-11-10 15:24:57.264137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:51.124 [2024-11-10 15:24:57.264616] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:15:51.124 [2024-11-10 15:24:57.264627] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:15:51.124 [2024-11-10 15:24:57.264760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:51.124 
15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.124 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.124 "name": "raid_bdev1", 00:15:51.124 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:51.124 "strip_size_kb": 64, 00:15:51.124 "state": "online", 00:15:51.124 "raid_level": "raid5f", 00:15:51.124 "superblock": false, 00:15:51.124 "num_base_bdevs": 4, 00:15:51.124 "num_base_bdevs_discovered": 4, 00:15:51.124 "num_base_bdevs_operational": 4, 00:15:51.124 "base_bdevs_list": [ 00:15:51.124 { 
00:15:51.124 "name": "BaseBdev1", 00:15:51.124 "uuid": "87d8b38e-b5db-5d09-9080-67d7c696815e", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev2", 00:15:51.124 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.124 { 00:15:51.124 "name": "BaseBdev3", 00:15:51.124 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:51.124 "is_configured": true, 00:15:51.124 "data_offset": 0, 00:15:51.124 "data_size": 65536 00:15:51.124 }, 00:15:51.125 { 00:15:51.125 "name": "BaseBdev4", 00:15:51.125 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:51.125 "is_configured": true, 00:15:51.125 "data_offset": 0, 00:15:51.125 "data_size": 65536 00:15:51.125 } 00:15:51.125 ] 00:15:51.125 }' 00:15:51.125 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.125 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.385 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.385 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:51.385 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.385 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.385 [2024-11-10 15:24:57.720077] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.385 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.645 15:24:57 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.645 15:24:57 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:51.645 [2024-11-10 15:24:57.992066] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:15:51.905 /dev/nbd0 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.905 1+0 records in 00:15:51.905 1+0 records out 00:15:51.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469818 s, 8.7 MB/s 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # return 0 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:51.905 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:52.476 512+0 records in 00:15:52.476 512+0 records out 00:15:52.476 100663296 bytes (101 MB, 96 MiB) copied, 0.720755 s, 140 MB/s 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:52.476 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:52.736 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:52.736 15:24:58 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:52.736 [2024-11-10 15:24:59.001195] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.736 15:24:58 bdev_raid.raid5f_rebuild_test 
-- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 [2024-11-10 15:24:59.017299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.736 15:24:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.736 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.736 "name": "raid_bdev1", 00:15:52.736 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:52.736 "strip_size_kb": 64, 00:15:52.736 "state": "online", 00:15:52.736 "raid_level": "raid5f", 00:15:52.736 "superblock": false, 00:15:52.736 "num_base_bdevs": 4, 00:15:52.736 "num_base_bdevs_discovered": 3, 00:15:52.736 "num_base_bdevs_operational": 3, 00:15:52.736 "base_bdevs_list": [ 00:15:52.736 { 00:15:52.736 "name": null, 00:15:52.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.736 "is_configured": false, 00:15:52.736 "data_offset": 0, 00:15:52.736 "data_size": 65536 00:15:52.736 }, 00:15:52.736 { 00:15:52.736 "name": "BaseBdev2", 00:15:52.736 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:52.736 "is_configured": true, 00:15:52.736 "data_offset": 0, 00:15:52.736 "data_size": 65536 00:15:52.736 }, 00:15:52.736 { 00:15:52.736 "name": "BaseBdev3", 00:15:52.736 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:52.736 "is_configured": true, 00:15:52.736 "data_offset": 0, 00:15:52.736 "data_size": 65536 00:15:52.736 }, 00:15:52.736 { 00:15:52.737 "name": "BaseBdev4", 00:15:52.737 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 
00:15:52.737 "is_configured": true, 00:15:52.737 "data_offset": 0, 00:15:52.737 "data_size": 65536 00:15:52.737 } 00:15:52.737 ] 00:15:52.737 }' 00:15:52.737 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.737 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.307 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.307 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.307 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.307 [2024-11-10 15:24:59.473402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.307 [2024-11-10 15:24:59.480706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bb60 00:15:53.307 15:24:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.307 15:24:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:53.307 [2024-11-10 15:24:59.483248] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.245 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.245 "name": "raid_bdev1", 00:15:54.245 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:54.245 "strip_size_kb": 64, 00:15:54.245 "state": "online", 00:15:54.245 "raid_level": "raid5f", 00:15:54.245 "superblock": false, 00:15:54.245 "num_base_bdevs": 4, 00:15:54.245 "num_base_bdevs_discovered": 4, 00:15:54.245 "num_base_bdevs_operational": 4, 00:15:54.245 "process": { 00:15:54.245 "type": "rebuild", 00:15:54.245 "target": "spare", 00:15:54.245 "progress": { 00:15:54.245 "blocks": 19200, 00:15:54.245 "percent": 9 00:15:54.245 } 00:15:54.245 }, 00:15:54.245 "base_bdevs_list": [ 00:15:54.245 { 00:15:54.245 "name": "spare", 00:15:54.245 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:54.245 "is_configured": true, 00:15:54.245 "data_offset": 0, 00:15:54.245 "data_size": 65536 00:15:54.245 }, 00:15:54.245 { 00:15:54.245 "name": "BaseBdev2", 00:15:54.245 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:54.245 "is_configured": true, 00:15:54.245 "data_offset": 0, 00:15:54.245 "data_size": 65536 00:15:54.245 }, 00:15:54.245 { 00:15:54.245 "name": "BaseBdev3", 00:15:54.245 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:54.245 "is_configured": true, 00:15:54.245 "data_offset": 0, 00:15:54.245 "data_size": 65536 00:15:54.245 }, 00:15:54.245 { 00:15:54.246 "name": "BaseBdev4", 00:15:54.246 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:54.246 "is_configured": true, 00:15:54.246 "data_offset": 0, 00:15:54.246 "data_size": 65536 00:15:54.246 } 00:15:54.246 ] 00:15:54.246 }' 00:15:54.246 15:25:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.246 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.246 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.506 [2024-11-10 15:25:00.640878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.506 [2024-11-10 15:25:00.691876] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.506 [2024-11-10 15:25:00.692006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.506 [2024-11-10 15:25:00.692059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.506 [2024-11-10 15:25:00.692079] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.506 15:25:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.506 "name": "raid_bdev1", 00:15:54.506 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:54.506 "strip_size_kb": 64, 00:15:54.506 "state": "online", 00:15:54.506 "raid_level": "raid5f", 00:15:54.506 "superblock": false, 00:15:54.506 "num_base_bdevs": 4, 00:15:54.506 "num_base_bdevs_discovered": 3, 00:15:54.506 "num_base_bdevs_operational": 3, 00:15:54.506 "base_bdevs_list": [ 00:15:54.506 { 00:15:54.506 "name": null, 00:15:54.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.506 "is_configured": false, 00:15:54.506 "data_offset": 0, 00:15:54.506 "data_size": 65536 00:15:54.506 }, 00:15:54.506 { 00:15:54.506 "name": "BaseBdev2", 00:15:54.506 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:54.506 
"is_configured": true, 00:15:54.506 "data_offset": 0, 00:15:54.506 "data_size": 65536 00:15:54.506 }, 00:15:54.506 { 00:15:54.506 "name": "BaseBdev3", 00:15:54.506 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:54.506 "is_configured": true, 00:15:54.506 "data_offset": 0, 00:15:54.506 "data_size": 65536 00:15:54.506 }, 00:15:54.506 { 00:15:54.506 "name": "BaseBdev4", 00:15:54.506 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:54.506 "is_configured": true, 00:15:54.506 "data_offset": 0, 00:15:54.506 "data_size": 65536 00:15:54.506 } 00:15:54.506 ] 00:15:54.506 }' 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.506 15:25:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.077 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.077 "name": 
"raid_bdev1", 00:15:55.077 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:55.077 "strip_size_kb": 64, 00:15:55.077 "state": "online", 00:15:55.077 "raid_level": "raid5f", 00:15:55.077 "superblock": false, 00:15:55.077 "num_base_bdevs": 4, 00:15:55.077 "num_base_bdevs_discovered": 3, 00:15:55.077 "num_base_bdevs_operational": 3, 00:15:55.077 "base_bdevs_list": [ 00:15:55.077 { 00:15:55.077 "name": null, 00:15:55.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.077 "is_configured": false, 00:15:55.077 "data_offset": 0, 00:15:55.077 "data_size": 65536 00:15:55.078 }, 00:15:55.078 { 00:15:55.078 "name": "BaseBdev2", 00:15:55.078 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:55.078 "is_configured": true, 00:15:55.078 "data_offset": 0, 00:15:55.078 "data_size": 65536 00:15:55.078 }, 00:15:55.078 { 00:15:55.078 "name": "BaseBdev3", 00:15:55.078 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:55.078 "is_configured": true, 00:15:55.078 "data_offset": 0, 00:15:55.078 "data_size": 65536 00:15:55.078 }, 00:15:55.078 { 00:15:55.078 "name": "BaseBdev4", 00:15:55.078 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:55.078 "is_configured": true, 00:15:55.078 "data_offset": 0, 00:15:55.078 "data_size": 65536 00:15:55.078 } 00:15:55.078 ] 00:15:55.078 }' 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.078 15:25:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.078 [2024-11-10 15:25:01.293524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:55.078 [2024-11-10 15:25:01.299063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002bc30 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.078 15:25:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:55.078 [2024-11-10 15:25:01.301676] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.019 "name": "raid_bdev1", 00:15:56.019 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:56.019 "strip_size_kb": 64, 00:15:56.019 
"state": "online", 00:15:56.019 "raid_level": "raid5f", 00:15:56.019 "superblock": false, 00:15:56.019 "num_base_bdevs": 4, 00:15:56.019 "num_base_bdevs_discovered": 4, 00:15:56.019 "num_base_bdevs_operational": 4, 00:15:56.019 "process": { 00:15:56.019 "type": "rebuild", 00:15:56.019 "target": "spare", 00:15:56.019 "progress": { 00:15:56.019 "blocks": 19200, 00:15:56.019 "percent": 9 00:15:56.019 } 00:15:56.019 }, 00:15:56.019 "base_bdevs_list": [ 00:15:56.019 { 00:15:56.019 "name": "spare", 00:15:56.019 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:56.019 "is_configured": true, 00:15:56.019 "data_offset": 0, 00:15:56.019 "data_size": 65536 00:15:56.019 }, 00:15:56.019 { 00:15:56.019 "name": "BaseBdev2", 00:15:56.019 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:56.019 "is_configured": true, 00:15:56.019 "data_offset": 0, 00:15:56.019 "data_size": 65536 00:15:56.019 }, 00:15:56.019 { 00:15:56.019 "name": "BaseBdev3", 00:15:56.019 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:56.019 "is_configured": true, 00:15:56.019 "data_offset": 0, 00:15:56.019 "data_size": 65536 00:15:56.019 }, 00:15:56.019 { 00:15:56.019 "name": "BaseBdev4", 00:15:56.019 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:56.019 "is_configured": true, 00:15:56.019 "data_offset": 0, 00:15:56.019 "data_size": 65536 00:15:56.019 } 00:15:56.019 ] 00:15:56.019 }' 00:15:56.019 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=516 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.280 "name": "raid_bdev1", 00:15:56.280 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:56.280 "strip_size_kb": 64, 00:15:56.280 "state": "online", 00:15:56.280 "raid_level": "raid5f", 00:15:56.280 "superblock": false, 00:15:56.280 "num_base_bdevs": 4, 00:15:56.280 "num_base_bdevs_discovered": 4, 00:15:56.280 "num_base_bdevs_operational": 4, 00:15:56.280 "process": { 00:15:56.280 "type": "rebuild", 
00:15:56.280 "target": "spare", 00:15:56.280 "progress": { 00:15:56.280 "blocks": 21120, 00:15:56.280 "percent": 10 00:15:56.280 } 00:15:56.280 }, 00:15:56.280 "base_bdevs_list": [ 00:15:56.280 { 00:15:56.280 "name": "spare", 00:15:56.280 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:56.280 "is_configured": true, 00:15:56.280 "data_offset": 0, 00:15:56.280 "data_size": 65536 00:15:56.280 }, 00:15:56.280 { 00:15:56.280 "name": "BaseBdev2", 00:15:56.280 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:56.280 "is_configured": true, 00:15:56.280 "data_offset": 0, 00:15:56.280 "data_size": 65536 00:15:56.280 }, 00:15:56.280 { 00:15:56.280 "name": "BaseBdev3", 00:15:56.280 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:56.280 "is_configured": true, 00:15:56.280 "data_offset": 0, 00:15:56.280 "data_size": 65536 00:15:56.280 }, 00:15:56.280 { 00:15:56.280 "name": "BaseBdev4", 00:15:56.280 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:56.280 "is_configured": true, 00:15:56.280 "data_offset": 0, 00:15:56.280 "data_size": 65536 00:15:56.280 } 00:15:56.280 ] 00:15:56.280 }' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.280 15:25:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.231 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.232 15:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.492 "name": "raid_bdev1", 00:15:57.492 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:57.492 "strip_size_kb": 64, 00:15:57.492 "state": "online", 00:15:57.492 "raid_level": "raid5f", 00:15:57.492 "superblock": false, 00:15:57.492 "num_base_bdevs": 4, 00:15:57.492 "num_base_bdevs_discovered": 4, 00:15:57.492 "num_base_bdevs_operational": 4, 00:15:57.492 "process": { 00:15:57.492 "type": "rebuild", 00:15:57.492 "target": "spare", 00:15:57.492 "progress": { 00:15:57.492 "blocks": 42240, 00:15:57.492 "percent": 21 00:15:57.492 } 00:15:57.492 }, 00:15:57.492 "base_bdevs_list": [ 00:15:57.492 { 00:15:57.492 "name": "spare", 00:15:57.492 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:57.492 "is_configured": true, 00:15:57.492 "data_offset": 0, 00:15:57.492 "data_size": 65536 00:15:57.492 }, 00:15:57.492 { 00:15:57.492 "name": "BaseBdev2", 00:15:57.492 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:57.492 "is_configured": true, 00:15:57.492 "data_offset": 0, 00:15:57.492 
"data_size": 65536 00:15:57.492 }, 00:15:57.492 { 00:15:57.492 "name": "BaseBdev3", 00:15:57.492 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:57.492 "is_configured": true, 00:15:57.492 "data_offset": 0, 00:15:57.492 "data_size": 65536 00:15:57.492 }, 00:15:57.492 { 00:15:57.492 "name": "BaseBdev4", 00:15:57.492 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:57.492 "is_configured": true, 00:15:57.492 "data_offset": 0, 00:15:57.492 "data_size": 65536 00:15:57.492 } 00:15:57.492 ] 00:15:57.492 }' 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.492 15:25:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.433 15:25:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.693 "name": "raid_bdev1", 00:15:58.693 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:58.693 "strip_size_kb": 64, 00:15:58.693 "state": "online", 00:15:58.693 "raid_level": "raid5f", 00:15:58.693 "superblock": false, 00:15:58.693 "num_base_bdevs": 4, 00:15:58.693 "num_base_bdevs_discovered": 4, 00:15:58.693 "num_base_bdevs_operational": 4, 00:15:58.693 "process": { 00:15:58.693 "type": "rebuild", 00:15:58.693 "target": "spare", 00:15:58.693 "progress": { 00:15:58.693 "blocks": 65280, 00:15:58.693 "percent": 33 00:15:58.693 } 00:15:58.693 }, 00:15:58.693 "base_bdevs_list": [ 00:15:58.693 { 00:15:58.693 "name": "spare", 00:15:58.693 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 0, 00:15:58.693 "data_size": 65536 00:15:58.693 }, 00:15:58.693 { 00:15:58.693 "name": "BaseBdev2", 00:15:58.693 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 0, 00:15:58.693 "data_size": 65536 00:15:58.693 }, 00:15:58.693 { 00:15:58.693 "name": "BaseBdev3", 00:15:58.693 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 0, 00:15:58.693 "data_size": 65536 00:15:58.693 }, 00:15:58.693 { 00:15:58.693 "name": "BaseBdev4", 00:15:58.693 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:58.693 "is_configured": true, 00:15:58.693 "data_offset": 0, 00:15:58.693 "data_size": 65536 00:15:58.693 } 00:15:58.693 ] 00:15:58.693 }' 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.693 15:25:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.632 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.632 "name": "raid_bdev1", 00:15:59.632 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:15:59.632 "strip_size_kb": 64, 00:15:59.632 "state": "online", 00:15:59.632 "raid_level": "raid5f", 00:15:59.632 "superblock": false, 00:15:59.632 
"num_base_bdevs": 4, 00:15:59.632 "num_base_bdevs_discovered": 4, 00:15:59.632 "num_base_bdevs_operational": 4, 00:15:59.632 "process": { 00:15:59.632 "type": "rebuild", 00:15:59.632 "target": "spare", 00:15:59.632 "progress": { 00:15:59.632 "blocks": 86400, 00:15:59.632 "percent": 43 00:15:59.632 } 00:15:59.632 }, 00:15:59.632 "base_bdevs_list": [ 00:15:59.632 { 00:15:59.633 "name": "spare", 00:15:59.633 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:15:59.633 "is_configured": true, 00:15:59.633 "data_offset": 0, 00:15:59.633 "data_size": 65536 00:15:59.633 }, 00:15:59.633 { 00:15:59.633 "name": "BaseBdev2", 00:15:59.633 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:15:59.633 "is_configured": true, 00:15:59.633 "data_offset": 0, 00:15:59.633 "data_size": 65536 00:15:59.633 }, 00:15:59.633 { 00:15:59.633 "name": "BaseBdev3", 00:15:59.633 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:15:59.633 "is_configured": true, 00:15:59.633 "data_offset": 0, 00:15:59.633 "data_size": 65536 00:15:59.633 }, 00:15:59.633 { 00:15:59.633 "name": "BaseBdev4", 00:15:59.633 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:15:59.633 "is_configured": true, 00:15:59.633 "data_offset": 0, 00:15:59.633 "data_size": 65536 00:15:59.633 } 00:15:59.633 ] 00:15:59.633 }' 00:15:59.633 15:25:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.892 15:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.892 15:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.892 15:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.892 15:25:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.831 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.831 "name": "raid_bdev1", 00:16:00.831 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:00.831 "strip_size_kb": 64, 00:16:00.831 "state": "online", 00:16:00.831 "raid_level": "raid5f", 00:16:00.831 "superblock": false, 00:16:00.831 "num_base_bdevs": 4, 00:16:00.831 "num_base_bdevs_discovered": 4, 00:16:00.831 "num_base_bdevs_operational": 4, 00:16:00.831 "process": { 00:16:00.831 "type": "rebuild", 00:16:00.831 "target": "spare", 00:16:00.831 "progress": { 00:16:00.832 "blocks": 109440, 00:16:00.832 "percent": 55 00:16:00.832 } 00:16:00.832 }, 00:16:00.832 "base_bdevs_list": [ 00:16:00.832 { 00:16:00.832 "name": "spare", 00:16:00.832 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:00.832 "is_configured": true, 00:16:00.832 "data_offset": 0, 00:16:00.832 "data_size": 65536 00:16:00.832 }, 00:16:00.832 { 00:16:00.832 
"name": "BaseBdev2", 00:16:00.832 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:00.832 "is_configured": true, 00:16:00.832 "data_offset": 0, 00:16:00.832 "data_size": 65536 00:16:00.832 }, 00:16:00.832 { 00:16:00.832 "name": "BaseBdev3", 00:16:00.832 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:00.832 "is_configured": true, 00:16:00.832 "data_offset": 0, 00:16:00.832 "data_size": 65536 00:16:00.832 }, 00:16:00.832 { 00:16:00.832 "name": "BaseBdev4", 00:16:00.832 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:00.832 "is_configured": true, 00:16:00.832 "data_offset": 0, 00:16:00.832 "data_size": 65536 00:16:00.832 } 00:16:00.832 ] 00:16:00.832 }' 00:16:00.832 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.832 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.832 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.091 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.091 15:25:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.031 "name": "raid_bdev1", 00:16:02.031 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:02.031 "strip_size_kb": 64, 00:16:02.031 "state": "online", 00:16:02.031 "raid_level": "raid5f", 00:16:02.031 "superblock": false, 00:16:02.031 "num_base_bdevs": 4, 00:16:02.031 "num_base_bdevs_discovered": 4, 00:16:02.031 "num_base_bdevs_operational": 4, 00:16:02.031 "process": { 00:16:02.031 "type": "rebuild", 00:16:02.031 "target": "spare", 00:16:02.031 "progress": { 00:16:02.031 "blocks": 130560, 00:16:02.031 "percent": 66 00:16:02.031 } 00:16:02.031 }, 00:16:02.031 "base_bdevs_list": [ 00:16:02.031 { 00:16:02.031 "name": "spare", 00:16:02.031 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 }, 00:16:02.031 { 00:16:02.031 "name": "BaseBdev2", 00:16:02.031 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 }, 00:16:02.031 { 00:16:02.031 "name": "BaseBdev3", 00:16:02.031 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 "data_size": 65536 00:16:02.031 }, 00:16:02.031 { 00:16:02.031 "name": "BaseBdev4", 00:16:02.031 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:02.031 "is_configured": true, 00:16:02.031 "data_offset": 0, 00:16:02.031 
"data_size": 65536 00:16:02.031 } 00:16:02.031 ] 00:16:02.031 }' 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.031 15:25:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.428 "name": "raid_bdev1", 00:16:03.428 "uuid": 
"f52e867e-6483-4e36-9bfa-122539da9680", 00:16:03.428 "strip_size_kb": 64, 00:16:03.428 "state": "online", 00:16:03.428 "raid_level": "raid5f", 00:16:03.428 "superblock": false, 00:16:03.428 "num_base_bdevs": 4, 00:16:03.428 "num_base_bdevs_discovered": 4, 00:16:03.428 "num_base_bdevs_operational": 4, 00:16:03.428 "process": { 00:16:03.428 "type": "rebuild", 00:16:03.428 "target": "spare", 00:16:03.428 "progress": { 00:16:03.428 "blocks": 153600, 00:16:03.428 "percent": 78 00:16:03.428 } 00:16:03.428 }, 00:16:03.428 "base_bdevs_list": [ 00:16:03.428 { 00:16:03.428 "name": "spare", 00:16:03.428 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:03.428 "is_configured": true, 00:16:03.428 "data_offset": 0, 00:16:03.428 "data_size": 65536 00:16:03.428 }, 00:16:03.428 { 00:16:03.428 "name": "BaseBdev2", 00:16:03.428 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:03.428 "is_configured": true, 00:16:03.428 "data_offset": 0, 00:16:03.428 "data_size": 65536 00:16:03.428 }, 00:16:03.428 { 00:16:03.428 "name": "BaseBdev3", 00:16:03.428 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:03.428 "is_configured": true, 00:16:03.428 "data_offset": 0, 00:16:03.428 "data_size": 65536 00:16:03.428 }, 00:16:03.428 { 00:16:03.428 "name": "BaseBdev4", 00:16:03.428 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:03.428 "is_configured": true, 00:16:03.428 "data_offset": 0, 00:16:03.428 "data_size": 65536 00:16:03.428 } 00:16:03.428 ] 00:16:03.428 }' 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.428 15:25:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.366 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.366 "name": "raid_bdev1", 00:16:04.366 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:04.366 "strip_size_kb": 64, 00:16:04.366 "state": "online", 00:16:04.366 "raid_level": "raid5f", 00:16:04.366 "superblock": false, 00:16:04.366 "num_base_bdevs": 4, 00:16:04.366 "num_base_bdevs_discovered": 4, 00:16:04.366 "num_base_bdevs_operational": 4, 00:16:04.366 "process": { 00:16:04.366 "type": "rebuild", 00:16:04.366 "target": "spare", 00:16:04.367 "progress": { 00:16:04.367 "blocks": 174720, 00:16:04.367 "percent": 88 00:16:04.367 } 00:16:04.367 }, 00:16:04.367 "base_bdevs_list": [ 00:16:04.367 { 00:16:04.367 "name": "spare", 00:16:04.367 "uuid": 
"138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:04.367 "is_configured": true, 00:16:04.367 "data_offset": 0, 00:16:04.367 "data_size": 65536 00:16:04.367 }, 00:16:04.367 { 00:16:04.367 "name": "BaseBdev2", 00:16:04.367 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:04.367 "is_configured": true, 00:16:04.367 "data_offset": 0, 00:16:04.367 "data_size": 65536 00:16:04.367 }, 00:16:04.367 { 00:16:04.367 "name": "BaseBdev3", 00:16:04.367 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:04.367 "is_configured": true, 00:16:04.367 "data_offset": 0, 00:16:04.367 "data_size": 65536 00:16:04.367 }, 00:16:04.367 { 00:16:04.367 "name": "BaseBdev4", 00:16:04.367 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:04.367 "is_configured": true, 00:16:04.367 "data_offset": 0, 00:16:04.367 "data_size": 65536 00:16:04.367 } 00:16:04.367 ] 00:16:04.367 }' 00:16:04.367 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.367 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.367 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.367 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.367 15:25:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.748 [2024-11-10 15:25:11.668051] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:05.748 [2024-11-10 15:25:11.668207] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:05.748 [2024-11-10 15:25:11.668294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.748 "name": "raid_bdev1", 00:16:05.748 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:05.748 "strip_size_kb": 64, 00:16:05.748 "state": "online", 00:16:05.748 "raid_level": "raid5f", 00:16:05.748 "superblock": false, 00:16:05.748 "num_base_bdevs": 4, 00:16:05.748 "num_base_bdevs_discovered": 4, 00:16:05.748 "num_base_bdevs_operational": 4, 00:16:05.748 "base_bdevs_list": [ 00:16:05.748 { 00:16:05.748 "name": "spare", 00:16:05.748 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev2", 00:16:05.748 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev3", 00:16:05.748 
"uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev4", 00:16:05.748 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 } 00:16:05.748 ] 00:16:05.748 }' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.748 "name": "raid_bdev1", 00:16:05.748 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:05.748 "strip_size_kb": 64, 00:16:05.748 "state": "online", 00:16:05.748 "raid_level": "raid5f", 00:16:05.748 "superblock": false, 00:16:05.748 "num_base_bdevs": 4, 00:16:05.748 "num_base_bdevs_discovered": 4, 00:16:05.748 "num_base_bdevs_operational": 4, 00:16:05.748 "base_bdevs_list": [ 00:16:05.748 { 00:16:05.748 "name": "spare", 00:16:05.748 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev2", 00:16:05.748 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev3", 00:16:05.748 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 }, 00:16:05.748 { 00:16:05.748 "name": "BaseBdev4", 00:16:05.748 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:05.748 "is_configured": true, 00:16:05.748 "data_offset": 0, 00:16:05.748 "data_size": 65536 00:16:05.748 } 00:16:05.748 ] 00:16:05.748 }' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.748 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.749 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.749 15:25:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.749 15:25:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.749 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.749 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.749 "name": "raid_bdev1", 00:16:05.749 "uuid": "f52e867e-6483-4e36-9bfa-122539da9680", 00:16:05.749 "strip_size_kb": 64, 00:16:05.749 "state": "online", 00:16:05.749 "raid_level": "raid5f", 00:16:05.749 "superblock": false, 00:16:05.749 "num_base_bdevs": 4, 00:16:05.749 "num_base_bdevs_discovered": 4, 00:16:05.749 
"num_base_bdevs_operational": 4, 00:16:05.749 "base_bdevs_list": [ 00:16:05.749 { 00:16:05.749 "name": "spare", 00:16:05.749 "uuid": "138fd48b-b631-5664-95db-6e75d0159eeb", 00:16:05.749 "is_configured": true, 00:16:05.749 "data_offset": 0, 00:16:05.749 "data_size": 65536 00:16:05.749 }, 00:16:05.749 { 00:16:05.749 "name": "BaseBdev2", 00:16:05.749 "uuid": "a20e5e6d-4073-5747-9cef-e549c14914d0", 00:16:05.749 "is_configured": true, 00:16:05.749 "data_offset": 0, 00:16:05.749 "data_size": 65536 00:16:05.749 }, 00:16:05.749 { 00:16:05.749 "name": "BaseBdev3", 00:16:05.749 "uuid": "058411b9-7a73-56a1-a968-613c6e5cf8cb", 00:16:05.749 "is_configured": true, 00:16:05.749 "data_offset": 0, 00:16:05.749 "data_size": 65536 00:16:05.749 }, 00:16:05.749 { 00:16:05.749 "name": "BaseBdev4", 00:16:05.749 "uuid": "1aecfd49-00e8-5be1-aa38-73928fdb2d57", 00:16:05.749 "is_configured": true, 00:16:05.749 "data_offset": 0, 00:16:05.749 "data_size": 65536 00:16:05.749 } 00:16:05.749 ] 00:16:05.749 }' 00:16:05.749 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.749 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.318 [2024-11-10 15:25:12.482277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.318 [2024-11-10 15:25:12.482366] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.318 [2024-11-10 15:25:12.482453] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.318 [2024-11-10 15:25:12.482603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:06.318 [2024-11-10 15:25:12.482615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:06.318 
15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.318 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:06.577 /dev/nbd0 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.577 1+0 records in 00:16:06.577 1+0 records out 00:16:06.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000299712 s, 13.7 MB/s 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.577 15:25:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:06.836 /dev/nbd1 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # local i 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # break 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:06.836 1+0 records in 00:16:06.836 1+0 records out 00:16:06.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415948 s, 9.8 MB/s 
00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # size=4096 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # return 0 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.836 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.095 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 96425 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@952 -- # '[' -z 96425 ']' 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- 
# kill -0 96425 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # uname 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96425 00:16:07.355 killing process with pid 96425 00:16:07.355 Received shutdown signal, test time was about 60.000000 seconds 00:16:07.355 00:16:07.355 Latency(us) 00:16:07.355 [2024-11-10T15:25:13.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.355 [2024-11-10T15:25:13.718Z] =================================================================================================================== 00:16:07.355 [2024-11-10T15:25:13.718Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96425' 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@971 -- # kill 96425 00:16:07.355 [2024-11-10 15:25:13.598828] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.355 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@976 -- # wait 96425 00:16:07.355 [2024-11-10 15:25:13.689677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:07.925 15:25:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:07.925 00:16:07.925 real 0m17.890s 00:16:07.925 user 0m21.531s 00:16:07.925 sys 0m2.779s 00:16:07.925 ************************************ 00:16:07.925 END TEST raid5f_rebuild_test 00:16:07.925 ************************************ 00:16:07.925 15:25:13 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:07.925 15:25:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.925 15:25:14 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:07.925 15:25:14 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:07.925 15:25:14 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:07.925 15:25:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.925 ************************************ 00:16:07.925 START TEST raid5f_rebuild_test_sb 00:16:07.925 ************************************ 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid5f 4 true false true 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev2 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=96917 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 96917 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@833 -- # '[' -z 96917 ']' 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:07.926 15:25:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.926 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:07.926 Zero copy mechanism will not be used. 00:16:07.926 [2024-11-10 15:25:14.189399] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:16:07.926 [2024-11-10 15:25:14.189574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96917 ] 00:16:08.186 [2024-11-10 15:25:14.323102] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:08.186 [2024-11-10 15:25:14.362347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.186 [2024-11-10 15:25:14.402266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.186 [2024-11-10 15:25:14.481367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.186 [2024-11-10 15:25:14.481411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@866 -- # return 0 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.756 BaseBdev1_malloc 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.756 [2024-11-10 15:25:15.053816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:08.756 [2024-11-10 15:25:15.053969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.756 [2024-11-10 15:25:15.054002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:08.756 [2024-11-10 15:25:15.054020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.756 [2024-11-10 15:25:15.056447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.756 [2024-11-10 15:25:15.056501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:08.756 BaseBdev1 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.756 BaseBdev2_malloc 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.756 [2024-11-10 15:25:15.088743] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:08.756 [2024-11-10 15:25:15.088799] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:08.756 [2024-11-10 15:25:15.088819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:08.756 [2024-11-10 15:25:15.088830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:08.756 [2024-11-10 15:25:15.091086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:08.756 [2024-11-10 15:25:15.091167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:08.756 BaseBdev2 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.756 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.018 BaseBdev3_malloc 00:16:09.018 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 [2024-11-10 15:25:15.123650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:09.019 [2024-11-10 15:25:15.123705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:09.019 [2024-11-10 15:25:15.123729] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:09.019 [2024-11-10 15:25:15.123740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.019 [2024-11-10 15:25:15.126163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.019 [2024-11-10 15:25:15.126246] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:09.019 BaseBdev3 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 BaseBdev4_malloc 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 [2024-11-10 15:25:15.167098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:09.019 [2024-11-10 15:25:15.167154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.019 [2024-11-10 15:25:15.167172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:09.019 [2024-11-10 
15:25:15.167183] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.019 [2024-11-10 15:25:15.169625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.019 [2024-11-10 15:25:15.169660] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:09.019 BaseBdev4 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 spare_malloc 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 spare_delay 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 [2024-11-10 15:25:15.214144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.019 [2024-11-10 15:25:15.214205] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.019 [2024-11-10 15:25:15.214224] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:09.019 [2024-11-10 15:25:15.214235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.019 [2024-11-10 15:25:15.216636] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.019 [2024-11-10 15:25:15.216676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.019 spare 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.019 [2024-11-10 15:25:15.226246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:09.019 [2024-11-10 15:25:15.228367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.019 [2024-11-10 15:25:15.228428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:09.019 [2024-11-10 15:25:15.228470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:09.019 [2024-11-10 15:25:15.228652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:09.019 [2024-11-10 15:25:15.228669] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:09.019 [2024-11-10 15:25:15.228927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:09.019 [2024-11-10 15:25:15.229410] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:09.019 [2024-11-10 15:25:15.229421] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:09.019 [2024-11-10 15:25:15.229538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.019 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.019 "name": "raid_bdev1", 00:16:09.019 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:09.019 "strip_size_kb": 64, 00:16:09.019 "state": "online", 00:16:09.019 "raid_level": "raid5f", 00:16:09.019 "superblock": true, 00:16:09.019 "num_base_bdevs": 4, 00:16:09.019 "num_base_bdevs_discovered": 4, 00:16:09.019 "num_base_bdevs_operational": 4, 00:16:09.019 "base_bdevs_list": [ 00:16:09.019 { 00:16:09.019 "name": "BaseBdev1", 00:16:09.019 "uuid": "130d1156-9424-5e16-bb5b-6a17c8157c77", 00:16:09.019 "is_configured": true, 00:16:09.019 "data_offset": 2048, 00:16:09.019 "data_size": 63488 00:16:09.019 }, 00:16:09.019 { 00:16:09.019 "name": "BaseBdev2", 00:16:09.019 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:09.019 "is_configured": true, 00:16:09.019 "data_offset": 2048, 00:16:09.019 "data_size": 63488 00:16:09.019 }, 00:16:09.019 { 00:16:09.019 "name": "BaseBdev3", 00:16:09.019 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:09.019 "is_configured": true, 00:16:09.019 "data_offset": 2048, 00:16:09.019 "data_size": 63488 00:16:09.019 }, 00:16:09.019 { 00:16:09.019 "name": "BaseBdev4", 00:16:09.019 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:09.019 "is_configured": true, 00:16:09.019 "data_offset": 2048, 00:16:09.019 "data_size": 63488 00:16:09.019 } 00:16:09.019 ] 00:16:09.019 }' 00:16:09.020 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.020 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.589 [2024-11-10 15:25:15.712733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 
-- # local bdev_list 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.589 15:25:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:09.849 [2024-11-10 15:25:15.972771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:16:09.849 /dev/nbd0 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:09.849 1+0 records in 00:16:09.849 1+0 records out 00:16:09.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051425 s, 8.0 MB/s 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:09.849 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:10.418 496+0 records in 00:16:10.418 496+0 records out 00:16:10.418 97517568 bytes (98 MB, 93 MiB) copied, 0.476347 s, 205 MB/s 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:10.418 [2024-11-10 15:25:16.744236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.418 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.418 [2024-11-10 15:25:16.775920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.678 15:25:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.678 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.678 "name": "raid_bdev1", 00:16:10.678 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:10.678 "strip_size_kb": 64, 00:16:10.678 "state": "online", 00:16:10.678 "raid_level": "raid5f", 00:16:10.678 "superblock": true, 
00:16:10.678 "num_base_bdevs": 4, 00:16:10.678 "num_base_bdevs_discovered": 3, 00:16:10.678 "num_base_bdevs_operational": 3, 00:16:10.678 "base_bdevs_list": [ 00:16:10.678 { 00:16:10.678 "name": null, 00:16:10.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.678 "is_configured": false, 00:16:10.678 "data_offset": 0, 00:16:10.678 "data_size": 63488 00:16:10.678 }, 00:16:10.678 { 00:16:10.678 "name": "BaseBdev2", 00:16:10.678 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:10.678 "is_configured": true, 00:16:10.678 "data_offset": 2048, 00:16:10.678 "data_size": 63488 00:16:10.678 }, 00:16:10.678 { 00:16:10.678 "name": "BaseBdev3", 00:16:10.678 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:10.678 "is_configured": true, 00:16:10.678 "data_offset": 2048, 00:16:10.678 "data_size": 63488 00:16:10.678 }, 00:16:10.678 { 00:16:10.678 "name": "BaseBdev4", 00:16:10.679 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:10.679 "is_configured": true, 00:16:10.679 "data_offset": 2048, 00:16:10.679 "data_size": 63488 00:16:10.679 } 00:16:10.679 ] 00:16:10.679 }' 00:16:10.679 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.679 15:25:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.938 15:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:10.938 15:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.938 15:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.938 [2024-11-10 15:25:17.176003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:10.938 [2024-11-10 15:25:17.183251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ae60 00:16:10.938 15:25:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.938 
15:25:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:10.938 [2024-11-10 15:25:17.185877] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.878 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.138 "name": "raid_bdev1", 00:16:12.138 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:12.138 "strip_size_kb": 64, 00:16:12.138 "state": "online", 00:16:12.138 "raid_level": "raid5f", 00:16:12.138 "superblock": true, 00:16:12.138 "num_base_bdevs": 4, 00:16:12.138 "num_base_bdevs_discovered": 4, 00:16:12.138 "num_base_bdevs_operational": 4, 00:16:12.138 "process": { 00:16:12.138 "type": "rebuild", 00:16:12.138 "target": "spare", 00:16:12.138 "progress": { 00:16:12.138 "blocks": 19200, 00:16:12.138 "percent": 10 00:16:12.138 
} 00:16:12.138 }, 00:16:12.138 "base_bdevs_list": [ 00:16:12.138 { 00:16:12.138 "name": "spare", 00:16:12.138 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev2", 00:16:12.138 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev3", 00:16:12.138 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev4", 00:16:12.138 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 } 00:16:12.138 ] 00:16:12.138 }' 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.138 [2024-11-10 15:25:18.347824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.138 [2024-11-10 15:25:18.394415] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished 
rebuild on raid bdev raid_bdev1: No such device 00:16:12.138 [2024-11-10 15:25:18.394482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:12.138 [2024-11-10 15:25:18.394516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:12.138 [2024-11-10 15:25:18.394530] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.138 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.138 "name": "raid_bdev1", 00:16:12.138 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:12.138 "strip_size_kb": 64, 00:16:12.138 "state": "online", 00:16:12.138 "raid_level": "raid5f", 00:16:12.138 "superblock": true, 00:16:12.138 "num_base_bdevs": 4, 00:16:12.138 "num_base_bdevs_discovered": 3, 00:16:12.138 "num_base_bdevs_operational": 3, 00:16:12.138 "base_bdevs_list": [ 00:16:12.138 { 00:16:12.138 "name": null, 00:16:12.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.138 "is_configured": false, 00:16:12.138 "data_offset": 0, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev2", 00:16:12.138 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev3", 00:16:12.138 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:12.138 "is_configured": true, 00:16:12.138 "data_offset": 2048, 00:16:12.138 "data_size": 63488 00:16:12.138 }, 00:16:12.138 { 00:16:12.138 "name": "BaseBdev4", 00:16:12.138 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:12.139 "is_configured": true, 00:16:12.139 "data_offset": 2048, 00:16:12.139 "data_size": 63488 00:16:12.139 } 00:16:12.139 ] 00:16:12.139 }' 00:16:12.139 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.139 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.707 15:25:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.707 "name": "raid_bdev1", 00:16:12.707 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:12.707 "strip_size_kb": 64, 00:16:12.707 "state": "online", 00:16:12.707 "raid_level": "raid5f", 00:16:12.707 "superblock": true, 00:16:12.707 "num_base_bdevs": 4, 00:16:12.707 "num_base_bdevs_discovered": 3, 00:16:12.707 "num_base_bdevs_operational": 3, 00:16:12.707 "base_bdevs_list": [ 00:16:12.707 { 00:16:12.707 "name": null, 00:16:12.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.707 "is_configured": false, 00:16:12.707 "data_offset": 0, 00:16:12.707 "data_size": 63488 00:16:12.707 }, 00:16:12.707 { 00:16:12.707 "name": "BaseBdev2", 00:16:12.707 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:12.707 "is_configured": true, 00:16:12.707 "data_offset": 2048, 00:16:12.707 "data_size": 63488 00:16:12.707 }, 00:16:12.707 { 00:16:12.707 "name": "BaseBdev3", 00:16:12.707 "uuid": 
"e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:12.707 "is_configured": true, 00:16:12.707 "data_offset": 2048, 00:16:12.707 "data_size": 63488 00:16:12.707 }, 00:16:12.707 { 00:16:12.707 "name": "BaseBdev4", 00:16:12.707 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:12.707 "is_configured": true, 00:16:12.707 "data_offset": 2048, 00:16:12.707 "data_size": 63488 00:16:12.707 } 00:16:12.707 ] 00:16:12.707 }' 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.707 [2024-11-10 15:25:18.983811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.707 [2024-11-10 15:25:18.989391] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002af30 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.707 15:25:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:12.707 [2024-11-10 15:25:18.991974] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.644 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.644 15:25:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.644 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.904 "name": "raid_bdev1", 00:16:13.904 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:13.904 "strip_size_kb": 64, 00:16:13.904 "state": "online", 00:16:13.904 "raid_level": "raid5f", 00:16:13.904 "superblock": true, 00:16:13.904 "num_base_bdevs": 4, 00:16:13.904 "num_base_bdevs_discovered": 4, 00:16:13.904 "num_base_bdevs_operational": 4, 00:16:13.904 "process": { 00:16:13.904 "type": "rebuild", 00:16:13.904 "target": "spare", 00:16:13.904 "progress": { 00:16:13.904 "blocks": 19200, 00:16:13.904 "percent": 10 00:16:13.904 } 00:16:13.904 }, 00:16:13.904 "base_bdevs_list": [ 00:16:13.904 { 00:16:13.904 "name": "spare", 00:16:13.904 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:13.904 "is_configured": true, 00:16:13.904 "data_offset": 2048, 00:16:13.904 "data_size": 63488 00:16:13.904 }, 00:16:13.904 { 00:16:13.904 "name": "BaseBdev2", 00:16:13.904 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:13.904 
"is_configured": true, 00:16:13.904 "data_offset": 2048, 00:16:13.904 "data_size": 63488 00:16:13.904 }, 00:16:13.904 { 00:16:13.904 "name": "BaseBdev3", 00:16:13.904 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:13.904 "is_configured": true, 00:16:13.904 "data_offset": 2048, 00:16:13.904 "data_size": 63488 00:16:13.904 }, 00:16:13.904 { 00:16:13.904 "name": "BaseBdev4", 00:16:13.904 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:13.904 "is_configured": true, 00:16:13.904 "data_offset": 2048, 00:16:13.904 "data_size": 63488 00:16:13.904 } 00:16:13.904 ] 00:16:13.904 }' 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:13.904 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=534 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.904 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.918 "name": "raid_bdev1", 00:16:13.918 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:13.918 "strip_size_kb": 64, 00:16:13.918 "state": "online", 00:16:13.918 "raid_level": "raid5f", 00:16:13.918 "superblock": true, 00:16:13.918 "num_base_bdevs": 4, 00:16:13.918 "num_base_bdevs_discovered": 4, 00:16:13.918 "num_base_bdevs_operational": 4, 00:16:13.918 "process": { 00:16:13.918 "type": "rebuild", 00:16:13.918 "target": "spare", 00:16:13.918 "progress": { 00:16:13.918 "blocks": 21120, 00:16:13.918 "percent": 11 00:16:13.918 } 00:16:13.918 }, 00:16:13.918 "base_bdevs_list": [ 00:16:13.918 { 00:16:13.918 "name": "spare", 00:16:13.918 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:13.918 "is_configured": true, 00:16:13.918 "data_offset": 2048, 00:16:13.918 "data_size": 63488 00:16:13.918 }, 00:16:13.918 { 00:16:13.918 "name": "BaseBdev2", 00:16:13.918 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:13.918 
"is_configured": true, 00:16:13.918 "data_offset": 2048, 00:16:13.918 "data_size": 63488 00:16:13.918 }, 00:16:13.918 { 00:16:13.918 "name": "BaseBdev3", 00:16:13.918 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:13.918 "is_configured": true, 00:16:13.918 "data_offset": 2048, 00:16:13.918 "data_size": 63488 00:16:13.918 }, 00:16:13.918 { 00:16:13.918 "name": "BaseBdev4", 00:16:13.918 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:13.918 "is_configured": true, 00:16:13.918 "data_offset": 2048, 00:16:13.918 "data_size": 63488 00:16:13.918 } 00:16:13.918 ] 00:16:13.918 }' 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.918 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.178 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.178 15:25:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.118 15:25:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.118 "name": "raid_bdev1", 00:16:15.118 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:15.118 "strip_size_kb": 64, 00:16:15.118 "state": "online", 00:16:15.118 "raid_level": "raid5f", 00:16:15.118 "superblock": true, 00:16:15.118 "num_base_bdevs": 4, 00:16:15.118 "num_base_bdevs_discovered": 4, 00:16:15.118 "num_base_bdevs_operational": 4, 00:16:15.118 "process": { 00:16:15.118 "type": "rebuild", 00:16:15.118 "target": "spare", 00:16:15.118 "progress": { 00:16:15.118 "blocks": 42240, 00:16:15.118 "percent": 22 00:16:15.118 } 00:16:15.118 }, 00:16:15.118 "base_bdevs_list": [ 00:16:15.118 { 00:16:15.118 "name": "spare", 00:16:15.118 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:15.118 "is_configured": true, 00:16:15.118 "data_offset": 2048, 00:16:15.118 "data_size": 63488 00:16:15.118 }, 00:16:15.118 { 00:16:15.118 "name": "BaseBdev2", 00:16:15.118 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:15.118 "is_configured": true, 00:16:15.118 "data_offset": 2048, 00:16:15.118 "data_size": 63488 00:16:15.118 }, 00:16:15.118 { 00:16:15.118 "name": "BaseBdev3", 00:16:15.118 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:15.118 "is_configured": true, 00:16:15.118 "data_offset": 2048, 00:16:15.118 "data_size": 63488 00:16:15.118 }, 00:16:15.118 { 00:16:15.118 "name": "BaseBdev4", 00:16:15.118 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:15.118 "is_configured": true, 00:16:15.118 "data_offset": 2048, 00:16:15.118 
"data_size": 63488 00:16:15.118 } 00:16:15.118 ] 00:16:15.118 }' 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.118 15:25:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.500 "name": 
"raid_bdev1", 00:16:16.500 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:16.500 "strip_size_kb": 64, 00:16:16.500 "state": "online", 00:16:16.500 "raid_level": "raid5f", 00:16:16.500 "superblock": true, 00:16:16.500 "num_base_bdevs": 4, 00:16:16.500 "num_base_bdevs_discovered": 4, 00:16:16.500 "num_base_bdevs_operational": 4, 00:16:16.500 "process": { 00:16:16.500 "type": "rebuild", 00:16:16.500 "target": "spare", 00:16:16.500 "progress": { 00:16:16.500 "blocks": 65280, 00:16:16.500 "percent": 34 00:16:16.500 } 00:16:16.500 }, 00:16:16.500 "base_bdevs_list": [ 00:16:16.500 { 00:16:16.500 "name": "spare", 00:16:16.500 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:16.500 "is_configured": true, 00:16:16.500 "data_offset": 2048, 00:16:16.500 "data_size": 63488 00:16:16.500 }, 00:16:16.500 { 00:16:16.500 "name": "BaseBdev2", 00:16:16.500 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:16.500 "is_configured": true, 00:16:16.500 "data_offset": 2048, 00:16:16.500 "data_size": 63488 00:16:16.500 }, 00:16:16.500 { 00:16:16.500 "name": "BaseBdev3", 00:16:16.500 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:16.500 "is_configured": true, 00:16:16.500 "data_offset": 2048, 00:16:16.500 "data_size": 63488 00:16:16.500 }, 00:16:16.500 { 00:16:16.500 "name": "BaseBdev4", 00:16:16.500 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:16.500 "is_configured": true, 00:16:16.500 "data_offset": 2048, 00:16:16.500 "data_size": 63488 00:16:16.500 } 00:16:16.500 ] 00:16:16.500 }' 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.500 15:25:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.500 15:25:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.440 "name": "raid_bdev1", 00:16:17.440 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:17.440 "strip_size_kb": 64, 00:16:17.440 "state": "online", 00:16:17.440 "raid_level": "raid5f", 00:16:17.440 "superblock": true, 00:16:17.440 "num_base_bdevs": 4, 00:16:17.440 "num_base_bdevs_discovered": 4, 00:16:17.440 "num_base_bdevs_operational": 4, 00:16:17.440 "process": { 00:16:17.440 "type": "rebuild", 00:16:17.440 "target": "spare", 00:16:17.440 "progress": { 00:16:17.440 "blocks": 86400, 00:16:17.440 "percent": 45 00:16:17.440 } 00:16:17.440 }, 00:16:17.440 
"base_bdevs_list": [ 00:16:17.440 { 00:16:17.440 "name": "spare", 00:16:17.440 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:17.440 "is_configured": true, 00:16:17.440 "data_offset": 2048, 00:16:17.440 "data_size": 63488 00:16:17.440 }, 00:16:17.440 { 00:16:17.440 "name": "BaseBdev2", 00:16:17.440 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:17.440 "is_configured": true, 00:16:17.440 "data_offset": 2048, 00:16:17.440 "data_size": 63488 00:16:17.440 }, 00:16:17.440 { 00:16:17.440 "name": "BaseBdev3", 00:16:17.440 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:17.440 "is_configured": true, 00:16:17.440 "data_offset": 2048, 00:16:17.440 "data_size": 63488 00:16:17.440 }, 00:16:17.440 { 00:16:17.440 "name": "BaseBdev4", 00:16:17.440 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:17.440 "is_configured": true, 00:16:17.440 "data_offset": 2048, 00:16:17.440 "data_size": 63488 00:16:17.440 } 00:16:17.440 ] 00:16:17.440 }' 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.440 15:25:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.381 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.381 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.381 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.381 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.648 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.648 "name": "raid_bdev1", 00:16:18.648 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:18.648 "strip_size_kb": 64, 00:16:18.648 "state": "online", 00:16:18.648 "raid_level": "raid5f", 00:16:18.648 "superblock": true, 00:16:18.648 "num_base_bdevs": 4, 00:16:18.648 "num_base_bdevs_discovered": 4, 00:16:18.648 "num_base_bdevs_operational": 4, 00:16:18.648 "process": { 00:16:18.648 "type": "rebuild", 00:16:18.648 "target": "spare", 00:16:18.648 "progress": { 00:16:18.648 "blocks": 109440, 00:16:18.648 "percent": 57 00:16:18.649 } 00:16:18.649 }, 00:16:18.649 "base_bdevs_list": [ 00:16:18.649 { 00:16:18.649 "name": "spare", 00:16:18.649 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:18.649 "is_configured": true, 00:16:18.649 "data_offset": 2048, 00:16:18.649 "data_size": 63488 00:16:18.649 }, 00:16:18.649 { 00:16:18.649 "name": "BaseBdev2", 00:16:18.649 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:18.649 "is_configured": true, 00:16:18.649 "data_offset": 2048, 00:16:18.649 "data_size": 63488 00:16:18.649 }, 00:16:18.649 { 00:16:18.649 "name": "BaseBdev3", 00:16:18.649 "uuid": 
"e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:18.649 "is_configured": true, 00:16:18.649 "data_offset": 2048, 00:16:18.649 "data_size": 63488 00:16:18.649 }, 00:16:18.649 { 00:16:18.649 "name": "BaseBdev4", 00:16:18.649 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:18.649 "is_configured": true, 00:16:18.649 "data_offset": 2048, 00:16:18.649 "data_size": 63488 00:16:18.649 } 00:16:18.649 ] 00:16:18.649 }' 00:16:18.649 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.649 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.649 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.649 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.649 15:25:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.629 "name": "raid_bdev1", 00:16:19.629 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:19.629 "strip_size_kb": 64, 00:16:19.629 "state": "online", 00:16:19.629 "raid_level": "raid5f", 00:16:19.629 "superblock": true, 00:16:19.629 "num_base_bdevs": 4, 00:16:19.629 "num_base_bdevs_discovered": 4, 00:16:19.629 "num_base_bdevs_operational": 4, 00:16:19.629 "process": { 00:16:19.629 "type": "rebuild", 00:16:19.629 "target": "spare", 00:16:19.629 "progress": { 00:16:19.629 "blocks": 130560, 00:16:19.629 "percent": 68 00:16:19.629 } 00:16:19.629 }, 00:16:19.629 "base_bdevs_list": [ 00:16:19.629 { 00:16:19.629 "name": "spare", 00:16:19.629 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 2048, 00:16:19.629 "data_size": 63488 00:16:19.629 }, 00:16:19.629 { 00:16:19.629 "name": "BaseBdev2", 00:16:19.629 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 2048, 00:16:19.629 "data_size": 63488 00:16:19.629 }, 00:16:19.629 { 00:16:19.629 "name": "BaseBdev3", 00:16:19.629 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 2048, 00:16:19.629 "data_size": 63488 00:16:19.629 }, 00:16:19.629 { 00:16:19.629 "name": "BaseBdev4", 00:16:19.629 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:19.629 "is_configured": true, 00:16:19.629 "data_offset": 2048, 00:16:19.629 "data_size": 63488 00:16:19.629 } 00:16:19.629 ] 00:16:19.629 }' 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.629 15:25:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.629 15:25:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.888 15:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.888 15:25:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.827 "name": "raid_bdev1", 00:16:20.827 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:20.827 "strip_size_kb": 64, 00:16:20.827 "state": "online", 00:16:20.827 "raid_level": "raid5f", 00:16:20.827 "superblock": true, 
00:16:20.827 "num_base_bdevs": 4, 00:16:20.827 "num_base_bdevs_discovered": 4, 00:16:20.827 "num_base_bdevs_operational": 4, 00:16:20.827 "process": { 00:16:20.827 "type": "rebuild", 00:16:20.827 "target": "spare", 00:16:20.827 "progress": { 00:16:20.827 "blocks": 153600, 00:16:20.827 "percent": 80 00:16:20.827 } 00:16:20.827 }, 00:16:20.827 "base_bdevs_list": [ 00:16:20.827 { 00:16:20.827 "name": "spare", 00:16:20.827 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:20.827 "is_configured": true, 00:16:20.827 "data_offset": 2048, 00:16:20.827 "data_size": 63488 00:16:20.827 }, 00:16:20.827 { 00:16:20.827 "name": "BaseBdev2", 00:16:20.827 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:20.827 "is_configured": true, 00:16:20.827 "data_offset": 2048, 00:16:20.827 "data_size": 63488 00:16:20.827 }, 00:16:20.827 { 00:16:20.827 "name": "BaseBdev3", 00:16:20.827 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:20.827 "is_configured": true, 00:16:20.827 "data_offset": 2048, 00:16:20.827 "data_size": 63488 00:16:20.827 }, 00:16:20.827 { 00:16:20.827 "name": "BaseBdev4", 00:16:20.827 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:20.827 "is_configured": true, 00:16:20.827 "data_offset": 2048, 00:16:20.827 "data_size": 63488 00:16:20.827 } 00:16:20.827 ] 00:16:20.827 }' 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.827 15:25:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.207 15:25:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.207 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.208 "name": "raid_bdev1", 00:16:22.208 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:22.208 "strip_size_kb": 64, 00:16:22.208 "state": "online", 00:16:22.208 "raid_level": "raid5f", 00:16:22.208 "superblock": true, 00:16:22.208 "num_base_bdevs": 4, 00:16:22.208 "num_base_bdevs_discovered": 4, 00:16:22.208 "num_base_bdevs_operational": 4, 00:16:22.208 "process": { 00:16:22.208 "type": "rebuild", 00:16:22.208 "target": "spare", 00:16:22.208 "progress": { 00:16:22.208 "blocks": 174720, 00:16:22.208 "percent": 91 00:16:22.208 } 00:16:22.208 }, 00:16:22.208 "base_bdevs_list": [ 00:16:22.208 { 00:16:22.208 "name": "spare", 00:16:22.208 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:22.208 "is_configured": true, 00:16:22.208 "data_offset": 2048, 00:16:22.208 
"data_size": 63488 00:16:22.208 }, 00:16:22.208 { 00:16:22.208 "name": "BaseBdev2", 00:16:22.208 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:22.208 "is_configured": true, 00:16:22.208 "data_offset": 2048, 00:16:22.208 "data_size": 63488 00:16:22.208 }, 00:16:22.208 { 00:16:22.208 "name": "BaseBdev3", 00:16:22.208 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:22.208 "is_configured": true, 00:16:22.208 "data_offset": 2048, 00:16:22.208 "data_size": 63488 00:16:22.208 }, 00:16:22.208 { 00:16:22.208 "name": "BaseBdev4", 00:16:22.208 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:22.208 "is_configured": true, 00:16:22.208 "data_offset": 2048, 00:16:22.208 "data_size": 63488 00:16:22.208 } 00:16:22.208 ] 00:16:22.208 }' 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.208 15:25:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:22.777 [2024-11-10 15:25:29.056716] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:22.777 [2024-11-10 15:25:29.056787] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:22.777 [2024-11-10 15:25:29.056914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.036 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.036 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.036 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.036 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.036 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.037 "name": "raid_bdev1", 00:16:23.037 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:23.037 "strip_size_kb": 64, 00:16:23.037 "state": "online", 00:16:23.037 "raid_level": "raid5f", 00:16:23.037 "superblock": true, 00:16:23.037 "num_base_bdevs": 4, 00:16:23.037 "num_base_bdevs_discovered": 4, 00:16:23.037 "num_base_bdevs_operational": 4, 00:16:23.037 "base_bdevs_list": [ 00:16:23.037 { 00:16:23.037 "name": "spare", 00:16:23.037 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:23.037 "is_configured": true, 00:16:23.037 "data_offset": 2048, 00:16:23.037 "data_size": 63488 00:16:23.037 }, 00:16:23.037 { 00:16:23.037 "name": "BaseBdev2", 00:16:23.037 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:23.037 "is_configured": true, 00:16:23.037 "data_offset": 2048, 00:16:23.037 "data_size": 63488 00:16:23.037 }, 00:16:23.037 { 00:16:23.037 "name": "BaseBdev3", 00:16:23.037 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 
00:16:23.037 "is_configured": true, 00:16:23.037 "data_offset": 2048, 00:16:23.037 "data_size": 63488 00:16:23.037 }, 00:16:23.037 { 00:16:23.037 "name": "BaseBdev4", 00:16:23.037 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:23.037 "is_configured": true, 00:16:23.037 "data_offset": 2048, 00:16:23.037 "data_size": 63488 00:16:23.037 } 00:16:23.037 ] 00:16:23.037 }' 00:16:23.037 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 15:25:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.297 "name": "raid_bdev1", 00:16:23.297 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:23.297 "strip_size_kb": 64, 00:16:23.297 "state": "online", 00:16:23.297 "raid_level": "raid5f", 00:16:23.297 "superblock": true, 00:16:23.297 "num_base_bdevs": 4, 00:16:23.297 "num_base_bdevs_discovered": 4, 00:16:23.297 "num_base_bdevs_operational": 4, 00:16:23.297 "base_bdevs_list": [ 00:16:23.297 { 00:16:23.297 "name": "spare", 00:16:23.297 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:23.297 "is_configured": true, 00:16:23.297 "data_offset": 2048, 00:16:23.297 "data_size": 63488 00:16:23.297 }, 00:16:23.297 { 00:16:23.297 "name": "BaseBdev2", 00:16:23.297 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:23.297 "is_configured": true, 00:16:23.297 "data_offset": 2048, 00:16:23.297 "data_size": 63488 00:16:23.297 }, 00:16:23.297 { 00:16:23.297 "name": "BaseBdev3", 00:16:23.297 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:23.297 "is_configured": true, 00:16:23.297 "data_offset": 2048, 00:16:23.297 "data_size": 63488 00:16:23.297 }, 00:16:23.297 { 00:16:23.297 "name": "BaseBdev4", 00:16:23.297 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:23.297 "is_configured": true, 00:16:23.297 "data_offset": 2048, 00:16:23.297 "data_size": 63488 00:16:23.297 } 00:16:23.297 ] 00:16:23.297 }' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:23.297 15:25:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.297 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.557 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.557 "name": "raid_bdev1", 00:16:23.557 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:23.557 "strip_size_kb": 64, 00:16:23.557 "state": "online", 00:16:23.557 "raid_level": "raid5f", 00:16:23.557 "superblock": true, 
00:16:23.557 "num_base_bdevs": 4, 00:16:23.557 "num_base_bdevs_discovered": 4, 00:16:23.557 "num_base_bdevs_operational": 4, 00:16:23.557 "base_bdevs_list": [ 00:16:23.557 { 00:16:23.557 "name": "spare", 00:16:23.557 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:23.557 "is_configured": true, 00:16:23.557 "data_offset": 2048, 00:16:23.557 "data_size": 63488 00:16:23.557 }, 00:16:23.557 { 00:16:23.557 "name": "BaseBdev2", 00:16:23.557 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:23.557 "is_configured": true, 00:16:23.557 "data_offset": 2048, 00:16:23.557 "data_size": 63488 00:16:23.557 }, 00:16:23.557 { 00:16:23.557 "name": "BaseBdev3", 00:16:23.557 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:23.557 "is_configured": true, 00:16:23.557 "data_offset": 2048, 00:16:23.557 "data_size": 63488 00:16:23.557 }, 00:16:23.557 { 00:16:23.557 "name": "BaseBdev4", 00:16:23.557 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:23.557 "is_configured": true, 00:16:23.557 "data_offset": 2048, 00:16:23.557 "data_size": 63488 00:16:23.557 } 00:16:23.557 ] 00:16:23.557 }' 00:16:23.557 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.557 15:25:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.816 [2024-11-10 15:25:30.074748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:23.816 [2024-11-10 15:25:30.074789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:23.816 [2024-11-10 15:25:30.074876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:23.816 
[2024-11-10 15:25:30.074976] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:23.816 [2024-11-10 15:25:30.074988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:23.816 15:25:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:23.816 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:24.076 /dev/nbd0 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.076 1+0 records in 00:16:24.076 1+0 records out 00:16:24.076 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366197 s, 11.2 MB/s 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.076 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:24.336 /dev/nbd1 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # local i 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # break 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:24.336 15:25:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:24.336 1+0 records in 00:16:24.336 1+0 records out 00:16:24.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416743 s, 9.8 MB/s 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # size=4096 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # return 0 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:24.336 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.596 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:24.856 15:25:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.856 [2024-11-10 15:25:31.200688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:24.856 [2024-11-10 15:25:31.200816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.856 [2024-11-10 15:25:31.200858] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:24.856 [2024-11-10 15:25:31.200891] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.856 [2024-11-10 15:25:31.203266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.856 [2024-11-10 15:25:31.203352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:24.856 [2024-11-10 15:25:31.203481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:24.856 [2024-11-10 15:25:31.203545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:24.856 [2024-11-10 15:25:31.203711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:16:24.856 [2024-11-10 15:25:31.203837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:24.856 [2024-11-10 15:25:31.203954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:24.856 spare 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.856 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.117 [2024-11-10 15:25:31.304220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.117 [2024-11-10 15:25:31.304247] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.117 [2024-11-10 15:25:31.304531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000495e0 00:16:25.117 [2024-11-10 15:25:31.305022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.117 [2024-11-10 15:25:31.305058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.117 [2024-11-10 15:25:31.305209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.117 15:25:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.117 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.117 "name": "raid_bdev1", 00:16:25.117 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:25.117 "strip_size_kb": 64, 00:16:25.117 "state": "online", 00:16:25.117 "raid_level": "raid5f", 00:16:25.117 "superblock": true, 00:16:25.117 "num_base_bdevs": 4, 00:16:25.117 "num_base_bdevs_discovered": 4, 00:16:25.117 "num_base_bdevs_operational": 4, 00:16:25.117 "base_bdevs_list": [ 00:16:25.117 { 00:16:25.117 "name": "spare", 00:16:25.117 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:25.117 "is_configured": true, 00:16:25.117 "data_offset": 2048, 00:16:25.117 "data_size": 63488 
00:16:25.117 }, 00:16:25.117 { 00:16:25.117 "name": "BaseBdev2", 00:16:25.117 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:25.117 "is_configured": true, 00:16:25.117 "data_offset": 2048, 00:16:25.117 "data_size": 63488 00:16:25.117 }, 00:16:25.117 { 00:16:25.117 "name": "BaseBdev3", 00:16:25.117 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:25.117 "is_configured": true, 00:16:25.117 "data_offset": 2048, 00:16:25.118 "data_size": 63488 00:16:25.118 }, 00:16:25.118 { 00:16:25.118 "name": "BaseBdev4", 00:16:25.118 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:25.118 "is_configured": true, 00:16:25.118 "data_offset": 2048, 00:16:25.118 "data_size": 63488 00:16:25.118 } 00:16:25.118 ] 00:16:25.118 }' 00:16:25.118 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.118 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.688 15:25:31 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.688 "name": "raid_bdev1", 00:16:25.688 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:25.688 "strip_size_kb": 64, 00:16:25.688 "state": "online", 00:16:25.688 "raid_level": "raid5f", 00:16:25.688 "superblock": true, 00:16:25.688 "num_base_bdevs": 4, 00:16:25.688 "num_base_bdevs_discovered": 4, 00:16:25.688 "num_base_bdevs_operational": 4, 00:16:25.688 "base_bdevs_list": [ 00:16:25.688 { 00:16:25.688 "name": "spare", 00:16:25.688 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev2", 00:16:25.688 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev3", 00:16:25.688 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev4", 00:16:25.688 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 } 00:16:25.688 ] 00:16:25.688 }' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:25.688 15:25:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.688 [2024-11-10 15:25:31.961372] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.688 15:25:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.688 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.688 "name": "raid_bdev1", 00:16:25.688 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:25.688 "strip_size_kb": 64, 00:16:25.688 "state": "online", 00:16:25.688 "raid_level": "raid5f", 00:16:25.688 "superblock": true, 00:16:25.688 "num_base_bdevs": 4, 00:16:25.688 "num_base_bdevs_discovered": 3, 00:16:25.688 "num_base_bdevs_operational": 3, 00:16:25.688 "base_bdevs_list": [ 00:16:25.688 { 00:16:25.688 "name": null, 00:16:25.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.688 "is_configured": false, 00:16:25.688 "data_offset": 0, 00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev2", 00:16:25.688 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev3", 00:16:25.688 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 
00:16:25.688 "data_size": 63488 00:16:25.688 }, 00:16:25.688 { 00:16:25.688 "name": "BaseBdev4", 00:16:25.688 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:25.688 "is_configured": true, 00:16:25.688 "data_offset": 2048, 00:16:25.688 "data_size": 63488 00:16:25.688 } 00:16:25.688 ] 00:16:25.688 }' 00:16:25.688 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.688 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.258 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:26.258 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.258 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.258 [2024-11-10 15:25:32.429506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.258 [2024-11-10 15:25:32.429748] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.258 [2024-11-10 15:25:32.429817] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:26.258 [2024-11-10 15:25:32.429881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:26.258 [2024-11-10 15:25:32.436941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000496b0 00:16:26.258 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.258 15:25:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:26.258 [2024-11-10 15:25:32.439473] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.197 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.198 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.198 "name": "raid_bdev1", 00:16:27.198 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:27.198 "strip_size_kb": 64, 00:16:27.198 "state": "online", 00:16:27.198 
"raid_level": "raid5f", 00:16:27.198 "superblock": true, 00:16:27.198 "num_base_bdevs": 4, 00:16:27.198 "num_base_bdevs_discovered": 4, 00:16:27.198 "num_base_bdevs_operational": 4, 00:16:27.198 "process": { 00:16:27.198 "type": "rebuild", 00:16:27.198 "target": "spare", 00:16:27.198 "progress": { 00:16:27.198 "blocks": 19200, 00:16:27.198 "percent": 10 00:16:27.198 } 00:16:27.198 }, 00:16:27.198 "base_bdevs_list": [ 00:16:27.198 { 00:16:27.198 "name": "spare", 00:16:27.198 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:27.198 "is_configured": true, 00:16:27.198 "data_offset": 2048, 00:16:27.198 "data_size": 63488 00:16:27.198 }, 00:16:27.198 { 00:16:27.198 "name": "BaseBdev2", 00:16:27.198 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:27.198 "is_configured": true, 00:16:27.198 "data_offset": 2048, 00:16:27.198 "data_size": 63488 00:16:27.198 }, 00:16:27.198 { 00:16:27.198 "name": "BaseBdev3", 00:16:27.198 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:27.198 "is_configured": true, 00:16:27.198 "data_offset": 2048, 00:16:27.198 "data_size": 63488 00:16:27.198 }, 00:16:27.198 { 00:16:27.198 "name": "BaseBdev4", 00:16:27.198 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:27.198 "is_configured": true, 00:16:27.198 "data_offset": 2048, 00:16:27.198 "data_size": 63488 00:16:27.198 } 00:16:27.198 ] 00:16:27.198 }' 00:16:27.198 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.198 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.198 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.458 [2024-11-10 15:25:33.585438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.458 [2024-11-10 15:25:33.647876] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:27.458 [2024-11-10 15:25:33.647935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.458 [2024-11-10 15:25:33.647950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:27.458 [2024-11-10 15:25:33.647961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.458 "name": "raid_bdev1", 00:16:27.458 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:27.458 "strip_size_kb": 64, 00:16:27.458 "state": "online", 00:16:27.458 "raid_level": "raid5f", 00:16:27.458 "superblock": true, 00:16:27.458 "num_base_bdevs": 4, 00:16:27.458 "num_base_bdevs_discovered": 3, 00:16:27.458 "num_base_bdevs_operational": 3, 00:16:27.458 "base_bdevs_list": [ 00:16:27.458 { 00:16:27.458 "name": null, 00:16:27.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.458 "is_configured": false, 00:16:27.458 "data_offset": 0, 00:16:27.458 "data_size": 63488 00:16:27.458 }, 00:16:27.458 { 00:16:27.458 "name": "BaseBdev2", 00:16:27.458 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:27.458 "is_configured": true, 00:16:27.458 "data_offset": 2048, 00:16:27.458 "data_size": 63488 00:16:27.458 }, 00:16:27.458 { 00:16:27.458 "name": "BaseBdev3", 00:16:27.458 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:27.458 "is_configured": true, 00:16:27.458 "data_offset": 2048, 00:16:27.458 "data_size": 63488 00:16:27.458 }, 00:16:27.458 { 00:16:27.458 "name": "BaseBdev4", 00:16:27.458 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:27.458 "is_configured": true, 00:16:27.458 "data_offset": 2048, 00:16:27.458 "data_size": 63488 00:16:27.458 } 00:16:27.458 ] 00:16:27.458 
}' 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.458 15:25:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.028 15:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:28.028 15:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.028 15:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.028 [2024-11-10 15:25:34.101070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:28.028 [2024-11-10 15:25:34.101182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.028 [2024-11-10 15:25:34.101225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:28.028 [2024-11-10 15:25:34.101255] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.028 [2024-11-10 15:25:34.101758] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.028 [2024-11-10 15:25:34.101828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:28.028 [2024-11-10 15:25:34.101944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:28.028 [2024-11-10 15:25:34.101991] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:28.028 [2024-11-10 15:25:34.102049] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:28.028 [2024-11-10 15:25:34.102139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:28.028 [2024-11-10 15:25:34.107382] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049780 00:16:28.028 spare 00:16:28.028 15:25:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.028 15:25:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:28.029 [2024-11-10 15:25:34.109943] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.969 "name": "raid_bdev1", 00:16:28.969 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:28.969 "strip_size_kb": 64, 00:16:28.969 "state": 
"online", 00:16:28.969 "raid_level": "raid5f", 00:16:28.969 "superblock": true, 00:16:28.969 "num_base_bdevs": 4, 00:16:28.969 "num_base_bdevs_discovered": 4, 00:16:28.969 "num_base_bdevs_operational": 4, 00:16:28.969 "process": { 00:16:28.969 "type": "rebuild", 00:16:28.969 "target": "spare", 00:16:28.969 "progress": { 00:16:28.969 "blocks": 19200, 00:16:28.969 "percent": 10 00:16:28.969 } 00:16:28.969 }, 00:16:28.969 "base_bdevs_list": [ 00:16:28.969 { 00:16:28.969 "name": "spare", 00:16:28.969 "uuid": "8302efe6-96a3-5bf6-8d64-473d2a60e929", 00:16:28.969 "is_configured": true, 00:16:28.969 "data_offset": 2048, 00:16:28.969 "data_size": 63488 00:16:28.969 }, 00:16:28.969 { 00:16:28.969 "name": "BaseBdev2", 00:16:28.969 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:28.969 "is_configured": true, 00:16:28.969 "data_offset": 2048, 00:16:28.969 "data_size": 63488 00:16:28.969 }, 00:16:28.969 { 00:16:28.969 "name": "BaseBdev3", 00:16:28.969 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:28.969 "is_configured": true, 00:16:28.969 "data_offset": 2048, 00:16:28.969 "data_size": 63488 00:16:28.969 }, 00:16:28.969 { 00:16:28.969 "name": "BaseBdev4", 00:16:28.969 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:28.969 "is_configured": true, 00:16:28.969 "data_offset": 2048, 00:16:28.969 "data_size": 63488 00:16:28.969 } 00:16:28.969 ] 00:16:28.969 }' 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:28.969 15:25:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.969 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.969 [2024-11-10 15:25:35.267580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.969 [2024-11-10 15:25:35.318496] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:28.969 [2024-11-10 15:25:35.318547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.969 [2024-11-10 15:25:35.318567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:28.969 [2024-11-10 15:25:35.318575] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.229 15:25:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.229 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.229 "name": "raid_bdev1", 00:16:29.229 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:29.229 "strip_size_kb": 64, 00:16:29.229 "state": "online", 00:16:29.229 "raid_level": "raid5f", 00:16:29.229 "superblock": true, 00:16:29.229 "num_base_bdevs": 4, 00:16:29.229 "num_base_bdevs_discovered": 3, 00:16:29.229 "num_base_bdevs_operational": 3, 00:16:29.229 "base_bdevs_list": [ 00:16:29.229 { 00:16:29.229 "name": null, 00:16:29.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.229 "is_configured": false, 00:16:29.229 "data_offset": 0, 00:16:29.229 "data_size": 63488 00:16:29.230 }, 00:16:29.230 { 00:16:29.230 "name": "BaseBdev2", 00:16:29.230 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:29.230 "is_configured": true, 00:16:29.230 "data_offset": 2048, 00:16:29.230 "data_size": 63488 00:16:29.230 }, 00:16:29.230 { 00:16:29.230 "name": "BaseBdev3", 00:16:29.230 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:29.230 "is_configured": true, 00:16:29.230 "data_offset": 2048, 00:16:29.230 "data_size": 63488 00:16:29.230 }, 00:16:29.230 { 00:16:29.230 "name": "BaseBdev4", 00:16:29.230 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:29.230 "is_configured": true, 00:16:29.230 "data_offset": 2048, 00:16:29.230 
"data_size": 63488 00:16:29.230 } 00:16:29.230 ] 00:16:29.230 }' 00:16:29.230 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.230 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.490 "name": "raid_bdev1", 00:16:29.490 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:29.490 "strip_size_kb": 64, 00:16:29.490 "state": "online", 00:16:29.490 "raid_level": "raid5f", 00:16:29.490 "superblock": true, 00:16:29.490 "num_base_bdevs": 4, 00:16:29.490 "num_base_bdevs_discovered": 3, 00:16:29.490 "num_base_bdevs_operational": 3, 00:16:29.490 "base_bdevs_list": [ 00:16:29.490 { 00:16:29.490 "name": null, 00:16:29.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.490 
"is_configured": false, 00:16:29.490 "data_offset": 0, 00:16:29.490 "data_size": 63488 00:16:29.490 }, 00:16:29.490 { 00:16:29.490 "name": "BaseBdev2", 00:16:29.490 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:29.490 "is_configured": true, 00:16:29.490 "data_offset": 2048, 00:16:29.490 "data_size": 63488 00:16:29.490 }, 00:16:29.490 { 00:16:29.490 "name": "BaseBdev3", 00:16:29.490 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:29.490 "is_configured": true, 00:16:29.490 "data_offset": 2048, 00:16:29.490 "data_size": 63488 00:16:29.490 }, 00:16:29.490 { 00:16:29.490 "name": "BaseBdev4", 00:16:29.490 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:29.490 "is_configured": true, 00:16:29.490 "data_offset": 2048, 00:16:29.490 "data_size": 63488 00:16:29.490 } 00:16:29.490 ] 00:16:29.490 }' 00:16:29.490 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.750 15:25:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.750 [2024-11-10 15:25:35.911538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:29.750 [2024-11-10 15:25:35.911590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.750 [2024-11-10 15:25:35.911612] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:29.750 [2024-11-10 15:25:35.911621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.750 [2024-11-10 15:25:35.912098] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.750 [2024-11-10 15:25:35.912115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:29.750 [2024-11-10 15:25:35.912195] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:29.750 [2024-11-10 15:25:35.912211] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:29.750 [2024-11-10 15:25:35.912223] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:29.750 [2024-11-10 15:25:35.912233] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:29.750 BaseBdev1 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.750 15:25:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.689 "name": "raid_bdev1", 00:16:30.689 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:30.689 "strip_size_kb": 64, 00:16:30.689 "state": "online", 00:16:30.689 "raid_level": "raid5f", 00:16:30.689 "superblock": true, 00:16:30.689 "num_base_bdevs": 4, 00:16:30.689 "num_base_bdevs_discovered": 3, 00:16:30.689 "num_base_bdevs_operational": 3, 00:16:30.689 "base_bdevs_list": [ 00:16:30.689 { 00:16:30.689 "name": null, 00:16:30.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.689 "is_configured": false, 00:16:30.689 
"data_offset": 0, 00:16:30.689 "data_size": 63488 00:16:30.689 }, 00:16:30.689 { 00:16:30.689 "name": "BaseBdev2", 00:16:30.689 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:30.689 "is_configured": true, 00:16:30.689 "data_offset": 2048, 00:16:30.689 "data_size": 63488 00:16:30.689 }, 00:16:30.689 { 00:16:30.689 "name": "BaseBdev3", 00:16:30.689 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:30.689 "is_configured": true, 00:16:30.689 "data_offset": 2048, 00:16:30.689 "data_size": 63488 00:16:30.689 }, 00:16:30.689 { 00:16:30.689 "name": "BaseBdev4", 00:16:30.689 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:30.689 "is_configured": true, 00:16:30.689 "data_offset": 2048, 00:16:30.689 "data_size": 63488 00:16:30.689 } 00:16:30.689 ] 00:16:30.689 }' 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.689 15:25:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.258 "name": "raid_bdev1", 00:16:31.258 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:31.258 "strip_size_kb": 64, 00:16:31.258 "state": "online", 00:16:31.258 "raid_level": "raid5f", 00:16:31.258 "superblock": true, 00:16:31.258 "num_base_bdevs": 4, 00:16:31.258 "num_base_bdevs_discovered": 3, 00:16:31.258 "num_base_bdevs_operational": 3, 00:16:31.258 "base_bdevs_list": [ 00:16:31.258 { 00:16:31.258 "name": null, 00:16:31.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.258 "is_configured": false, 00:16:31.258 "data_offset": 0, 00:16:31.258 "data_size": 63488 00:16:31.258 }, 00:16:31.258 { 00:16:31.258 "name": "BaseBdev2", 00:16:31.258 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:31.258 "is_configured": true, 00:16:31.258 "data_offset": 2048, 00:16:31.258 "data_size": 63488 00:16:31.258 }, 00:16:31.258 { 00:16:31.258 "name": "BaseBdev3", 00:16:31.258 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:31.258 "is_configured": true, 00:16:31.258 "data_offset": 2048, 00:16:31.258 "data_size": 63488 00:16:31.258 }, 00:16:31.258 { 00:16:31.258 "name": "BaseBdev4", 00:16:31.258 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:31.258 "is_configured": true, 00:16:31.258 "data_offset": 2048, 00:16:31.258 "data_size": 63488 00:16:31.258 } 00:16:31.258 ] 00:16:31.258 }' 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.258 
15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.258 [2024-11-10 15:25:37.555962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.258 [2024-11-10 15:25:37.556177] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:31.258 [2024-11-10 15:25:37.556241] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:31.258 request: 00:16:31.258 { 00:16:31.258 "base_bdev": "BaseBdev1", 00:16:31.258 "raid_bdev": "raid_bdev1", 00:16:31.258 "method": "bdev_raid_add_base_bdev", 00:16:31.258 "req_id": 1 00:16:31.258 } 00:16:31.258 Got JSON-RPC error response 00:16:31.258 response: 00:16:31.258 { 00:16:31.258 "code": -22, 00:16:31.258 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:31.258 } 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:31.258 15:25:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.639 "name": "raid_bdev1", 00:16:32.639 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:32.639 "strip_size_kb": 64, 00:16:32.639 "state": "online", 00:16:32.639 "raid_level": "raid5f", 00:16:32.639 "superblock": true, 00:16:32.639 "num_base_bdevs": 4, 00:16:32.639 "num_base_bdevs_discovered": 3, 00:16:32.639 "num_base_bdevs_operational": 3, 00:16:32.639 "base_bdevs_list": [ 00:16:32.639 { 00:16:32.639 "name": null, 00:16:32.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.639 "is_configured": false, 00:16:32.639 "data_offset": 0, 00:16:32.639 "data_size": 63488 00:16:32.639 }, 00:16:32.639 { 00:16:32.639 "name": "BaseBdev2", 00:16:32.639 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:32.639 "is_configured": true, 00:16:32.639 "data_offset": 2048, 00:16:32.639 "data_size": 63488 00:16:32.639 }, 00:16:32.639 { 00:16:32.639 "name": "BaseBdev3", 00:16:32.639 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:32.639 "is_configured": true, 00:16:32.639 "data_offset": 2048, 00:16:32.639 "data_size": 63488 00:16:32.639 }, 00:16:32.639 { 00:16:32.639 "name": "BaseBdev4", 00:16:32.639 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:32.639 "is_configured": true, 00:16:32.639 "data_offset": 2048, 00:16:32.639 "data_size": 63488 00:16:32.639 } 00:16:32.639 ] 00:16:32.639 }' 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.639 15:25:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.899 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:32.900 "name": "raid_bdev1", 00:16:32.900 "uuid": "91b56621-63c4-440c-9179-459f97bb029e", 00:16:32.900 "strip_size_kb": 64, 00:16:32.900 "state": "online", 00:16:32.900 "raid_level": "raid5f", 00:16:32.900 "superblock": true, 00:16:32.900 "num_base_bdevs": 4, 00:16:32.900 "num_base_bdevs_discovered": 3, 00:16:32.900 "num_base_bdevs_operational": 3, 00:16:32.900 "base_bdevs_list": [ 00:16:32.900 { 00:16:32.900 "name": null, 00:16:32.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.900 "is_configured": false, 00:16:32.900 "data_offset": 0, 00:16:32.900 "data_size": 63488 00:16:32.900 }, 00:16:32.900 { 00:16:32.900 "name": "BaseBdev2", 00:16:32.900 "uuid": "59f7e3e1-f449-558b-8428-5259e13b928b", 00:16:32.900 "is_configured": true, 
00:16:32.900 "data_offset": 2048, 00:16:32.900 "data_size": 63488 00:16:32.900 }, 00:16:32.900 { 00:16:32.900 "name": "BaseBdev3", 00:16:32.900 "uuid": "e5a52af8-86af-5259-a2f7-754b4b15e1c4", 00:16:32.900 "is_configured": true, 00:16:32.900 "data_offset": 2048, 00:16:32.900 "data_size": 63488 00:16:32.900 }, 00:16:32.900 { 00:16:32.900 "name": "BaseBdev4", 00:16:32.900 "uuid": "04426ddd-ac33-5d34-9dd8-d7d0673b5921", 00:16:32.900 "is_configured": true, 00:16:32.900 "data_offset": 2048, 00:16:32.900 "data_size": 63488 00:16:32.900 } 00:16:32.900 ] 00:16:32.900 }' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 96917 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@952 -- # '[' -z 96917 ']' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # kill -0 96917 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # uname 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 96917 00:16:32.900 killing process with pid 96917 00:16:32.900 Received shutdown signal, test time was about 60.000000 seconds 00:16:32.900 00:16:32.900 Latency(us) 00:16:32.900 [2024-11-10T15:25:39.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.900 [2024-11-10T15:25:39.263Z] 
=================================================================================================================== 00:16:32.900 [2024-11-10T15:25:39.263Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@970 -- # echo 'killing process with pid 96917' 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@971 -- # kill 96917 00:16:32.900 [2024-11-10 15:25:39.195551] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.900 [2024-11-10 15:25:39.195662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.900 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@976 -- # wait 96917 00:16:32.900 [2024-11-10 15:25:39.195736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.900 [2024-11-10 15:25:39.195751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:33.160 [2024-11-10 15:25:39.288374] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.420 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:33.420 00:16:33.420 real 0m25.514s 00:16:33.420 user 0m32.229s 00:16:33.420 sys 0m3.240s 00:16:33.420 ************************************ 00:16:33.420 END TEST raid5f_rebuild_test_sb 00:16:33.420 ************************************ 00:16:33.420 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:33.420 15:25:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.420 15:25:39 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:33.420 15:25:39 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:33.420 15:25:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:33.420 15:25:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:33.420 15:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.420 ************************************ 00:16:33.420 START TEST raid_state_function_test_sb_4k 00:16:33.420 ************************************ 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:33.420 15:25:39 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=97709 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97709' 00:16:33.420 Process raid pid: 97709 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 97709 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 97709 ']' 00:16:33.420 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:33.420 15:25:39 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:33.680 [2024-11-10 15:25:39.793380] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:16:33.680 [2024-11-10 15:25:39.793507] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.680 [2024-11-10 15:25:39.933564] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:33.680 [2024-11-10 15:25:39.972811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.680 [2024-11-10 15:25:40.014471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.939 [2024-11-10 15:25:40.093503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.939 [2024-11-10 15:25:40.093540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.507 [2024-11-10 15:25:40.631205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:34.507 [2024-11-10 15:25:40.631260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.507 [2024-11-10 15:25:40.631274] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.507 [2024-11-10 15:25:40.631281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:34.507 
15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.507 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.508 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.508 "name": "Existed_Raid", 00:16:34.508 "uuid": "d94561a5-3732-45a7-8cc4-22c8f8d63a42", 00:16:34.508 "strip_size_kb": 0, 00:16:34.508 "state": "configuring", 00:16:34.508 "raid_level": "raid1", 00:16:34.508 "superblock": true, 00:16:34.508 "num_base_bdevs": 2, 00:16:34.508 "num_base_bdevs_discovered": 0, 00:16:34.508 "num_base_bdevs_operational": 2, 
00:16:34.508 "base_bdevs_list": [ 00:16:34.508 { 00:16:34.508 "name": "BaseBdev1", 00:16:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.508 "is_configured": false, 00:16:34.508 "data_offset": 0, 00:16:34.508 "data_size": 0 00:16:34.508 }, 00:16:34.508 { 00:16:34.508 "name": "BaseBdev2", 00:16:34.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.508 "is_configured": false, 00:16:34.508 "data_offset": 0, 00:16:34.508 "data_size": 0 00:16:34.508 } 00:16:34.508 ] 00:16:34.508 }' 00:16:34.508 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.508 15:25:40 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.767 [2024-11-10 15:25:41.111207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:34.767 [2024-11-10 15:25:41.111309] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.767 [2024-11-10 15:25:41.123227] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 
00:16:34.767 [2024-11-10 15:25:41.123292] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:34.767 [2024-11-10 15:25:41.123336] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:34.767 [2024-11-10 15:25:41.123355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:34.767 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.027 [2024-11-10 15:25:41.150684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.027 BaseBdev1 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.027 15:25:41 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.027 [ 00:16:35.027 { 00:16:35.027 "name": "BaseBdev1", 00:16:35.027 "aliases": [ 00:16:35.027 "c14d372e-8635-4c8b-8de2-5f9953eb8814" 00:16:35.027 ], 00:16:35.027 "product_name": "Malloc disk", 00:16:35.027 "block_size": 4096, 00:16:35.027 "num_blocks": 8192, 00:16:35.027 "uuid": "c14d372e-8635-4c8b-8de2-5f9953eb8814", 00:16:35.027 "assigned_rate_limits": { 00:16:35.027 "rw_ios_per_sec": 0, 00:16:35.027 "rw_mbytes_per_sec": 0, 00:16:35.027 "r_mbytes_per_sec": 0, 00:16:35.027 "w_mbytes_per_sec": 0 00:16:35.027 }, 00:16:35.027 "claimed": true, 00:16:35.027 "claim_type": "exclusive_write", 00:16:35.027 "zoned": false, 00:16:35.027 "supported_io_types": { 00:16:35.027 "read": true, 00:16:35.027 "write": true, 00:16:35.027 "unmap": true, 00:16:35.027 "flush": true, 00:16:35.027 "reset": true, 00:16:35.027 "nvme_admin": false, 00:16:35.027 "nvme_io": false, 00:16:35.027 "nvme_io_md": false, 00:16:35.027 "write_zeroes": true, 00:16:35.027 "zcopy": true, 00:16:35.027 "get_zone_info": false, 00:16:35.027 "zone_management": false, 00:16:35.027 "zone_append": false, 00:16:35.027 "compare": false, 00:16:35.027 "compare_and_write": false, 00:16:35.027 "abort": true, 00:16:35.027 "seek_hole": false, 00:16:35.027 "seek_data": false, 00:16:35.027 "copy": true, 00:16:35.027 "nvme_iov_md": false 
00:16:35.027 }, 00:16:35.027 "memory_domains": [ 00:16:35.027 { 00:16:35.027 "dma_device_id": "system", 00:16:35.027 "dma_device_type": 1 00:16:35.027 }, 00:16:35.027 { 00:16:35.027 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.027 "dma_device_type": 2 00:16:35.027 } 00:16:35.027 ], 00:16:35.027 "driver_specific": {} 00:16:35.027 } 00:16:35.027 ] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.027 "name": "Existed_Raid", 00:16:35.027 "uuid": "badc0bf9-5656-4383-a82c-5e458a8415d5", 00:16:35.027 "strip_size_kb": 0, 00:16:35.027 "state": "configuring", 00:16:35.027 "raid_level": "raid1", 00:16:35.027 "superblock": true, 00:16:35.027 "num_base_bdevs": 2, 00:16:35.027 "num_base_bdevs_discovered": 1, 00:16:35.027 "num_base_bdevs_operational": 2, 00:16:35.027 "base_bdevs_list": [ 00:16:35.027 { 00:16:35.027 "name": "BaseBdev1", 00:16:35.027 "uuid": "c14d372e-8635-4c8b-8de2-5f9953eb8814", 00:16:35.027 "is_configured": true, 00:16:35.027 "data_offset": 256, 00:16:35.027 "data_size": 7936 00:16:35.027 }, 00:16:35.027 { 00:16:35.027 "name": "BaseBdev2", 00:16:35.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.027 "is_configured": false, 00:16:35.027 "data_offset": 0, 00:16:35.027 "data_size": 0 00:16:35.027 } 00:16:35.027 ] 00:16:35.027 }' 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.027 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.287 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:35.287 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.287 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.287 [2024-11-10 
15:25:41.642797] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:35.287 [2024-11-10 15:25:41.642846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 [2024-11-10 15:25:41.654850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:35.547 [2024-11-10 15:25:41.656921] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:35.547 [2024-11-10 15:25:41.657000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.547 "name": "Existed_Raid", 00:16:35.547 "uuid": "32e9a44d-7048-4369-8d70-aa22423e818b", 00:16:35.547 "strip_size_kb": 0, 00:16:35.547 "state": "configuring", 00:16:35.547 "raid_level": "raid1", 00:16:35.547 "superblock": true, 00:16:35.547 "num_base_bdevs": 2, 00:16:35.547 "num_base_bdevs_discovered": 1, 00:16:35.547 "num_base_bdevs_operational": 2, 00:16:35.547 "base_bdevs_list": [ 00:16:35.547 { 00:16:35.547 "name": "BaseBdev1", 00:16:35.547 "uuid": "c14d372e-8635-4c8b-8de2-5f9953eb8814", 00:16:35.547 "is_configured": true, 00:16:35.547 "data_offset": 256, 
00:16:35.547 "data_size": 7936 00:16:35.547 }, 00:16:35.547 { 00:16:35.547 "name": "BaseBdev2", 00:16:35.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.547 "is_configured": false, 00:16:35.547 "data_offset": 0, 00:16:35.547 "data_size": 0 00:16:35.547 } 00:16:35.547 ] 00:16:35.547 }' 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.547 15:25:41 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 [2024-11-10 15:25:42.096029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:35.807 [2024-11-10 15:25:42.096334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:35.807 [2024-11-10 15:25:42.096391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.807 [2024-11-10 15:25:42.096737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:35.807 BaseBdev2 00:16:35.807 [2024-11-10 15:25:42.096934] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:35.807 [2024-11-10 15:25:42.096980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:16:35.807 [2024-11-10 15:25:42.097184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local i 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.807 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.807 [ 00:16:35.807 { 00:16:35.808 "name": "BaseBdev2", 00:16:35.808 "aliases": [ 00:16:35.808 "7514c820-8cf5-4f42-94c2-1faae6d35f1c" 00:16:35.808 ], 00:16:35.808 "product_name": "Malloc disk", 00:16:35.808 "block_size": 4096, 00:16:35.808 "num_blocks": 8192, 00:16:35.808 "uuid": "7514c820-8cf5-4f42-94c2-1faae6d35f1c", 00:16:35.808 "assigned_rate_limits": { 00:16:35.808 "rw_ios_per_sec": 0, 00:16:35.808 "rw_mbytes_per_sec": 0, 00:16:35.808 "r_mbytes_per_sec": 0, 00:16:35.808 "w_mbytes_per_sec": 0 00:16:35.808 }, 
00:16:35.808 "claimed": true, 00:16:35.808 "claim_type": "exclusive_write", 00:16:35.808 "zoned": false, 00:16:35.808 "supported_io_types": { 00:16:35.808 "read": true, 00:16:35.808 "write": true, 00:16:35.808 "unmap": true, 00:16:35.808 "flush": true, 00:16:35.808 "reset": true, 00:16:35.808 "nvme_admin": false, 00:16:35.808 "nvme_io": false, 00:16:35.808 "nvme_io_md": false, 00:16:35.808 "write_zeroes": true, 00:16:35.808 "zcopy": true, 00:16:35.808 "get_zone_info": false, 00:16:35.808 "zone_management": false, 00:16:35.808 "zone_append": false, 00:16:35.808 "compare": false, 00:16:35.808 "compare_and_write": false, 00:16:35.808 "abort": true, 00:16:35.808 "seek_hole": false, 00:16:35.808 "seek_data": false, 00:16:35.808 "copy": true, 00:16:35.808 "nvme_iov_md": false 00:16:35.808 }, 00:16:35.808 "memory_domains": [ 00:16:35.808 { 00:16:35.808 "dma_device_id": "system", 00:16:35.808 "dma_device_type": 1 00:16:35.808 }, 00:16:35.808 { 00:16:35.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.808 "dma_device_type": 2 00:16:35.808 } 00:16:35.808 ], 00:16:35.808 "driver_specific": {} 00:16:35.808 } 00:16:35.808 ] 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@909 -- # return 0 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.808 15:25:42 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.808 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.068 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.068 "name": "Existed_Raid", 00:16:36.068 "uuid": "32e9a44d-7048-4369-8d70-aa22423e818b", 00:16:36.068 "strip_size_kb": 0, 00:16:36.068 "state": "online", 00:16:36.068 "raid_level": "raid1", 00:16:36.068 "superblock": true, 00:16:36.068 "num_base_bdevs": 2, 00:16:36.068 "num_base_bdevs_discovered": 2, 00:16:36.068 "num_base_bdevs_operational": 2, 00:16:36.068 "base_bdevs_list": [ 00:16:36.068 { 00:16:36.068 "name": "BaseBdev1", 00:16:36.068 "uuid": 
"c14d372e-8635-4c8b-8de2-5f9953eb8814", 00:16:36.068 "is_configured": true, 00:16:36.068 "data_offset": 256, 00:16:36.068 "data_size": 7936 00:16:36.068 }, 00:16:36.068 { 00:16:36.068 "name": "BaseBdev2", 00:16:36.068 "uuid": "7514c820-8cf5-4f42-94c2-1faae6d35f1c", 00:16:36.068 "is_configured": true, 00:16:36.068 "data_offset": 256, 00:16:36.068 "data_size": 7936 00:16:36.068 } 00:16:36.068 ] 00:16:36.068 }' 00:16:36.068 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.068 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.328 [2024-11-10 15:25:42.596415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.328 "name": "Existed_Raid", 00:16:36.328 "aliases": [ 00:16:36.328 "32e9a44d-7048-4369-8d70-aa22423e818b" 00:16:36.328 ], 00:16:36.328 "product_name": "Raid Volume", 00:16:36.328 "block_size": 4096, 00:16:36.328 "num_blocks": 7936, 00:16:36.328 "uuid": "32e9a44d-7048-4369-8d70-aa22423e818b", 00:16:36.328 "assigned_rate_limits": { 00:16:36.328 "rw_ios_per_sec": 0, 00:16:36.328 "rw_mbytes_per_sec": 0, 00:16:36.328 "r_mbytes_per_sec": 0, 00:16:36.328 "w_mbytes_per_sec": 0 00:16:36.328 }, 00:16:36.328 "claimed": false, 00:16:36.328 "zoned": false, 00:16:36.328 "supported_io_types": { 00:16:36.328 "read": true, 00:16:36.328 "write": true, 00:16:36.328 "unmap": false, 00:16:36.328 "flush": false, 00:16:36.328 "reset": true, 00:16:36.328 "nvme_admin": false, 00:16:36.328 "nvme_io": false, 00:16:36.328 "nvme_io_md": false, 00:16:36.328 "write_zeroes": true, 00:16:36.328 "zcopy": false, 00:16:36.328 "get_zone_info": false, 00:16:36.328 "zone_management": false, 00:16:36.328 "zone_append": false, 00:16:36.328 "compare": false, 00:16:36.328 "compare_and_write": false, 00:16:36.328 "abort": false, 00:16:36.328 "seek_hole": false, 00:16:36.328 "seek_data": false, 00:16:36.328 "copy": false, 00:16:36.328 "nvme_iov_md": false 00:16:36.328 }, 00:16:36.328 "memory_domains": [ 00:16:36.328 { 00:16:36.328 "dma_device_id": "system", 00:16:36.328 "dma_device_type": 1 00:16:36.328 }, 00:16:36.328 { 00:16:36.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.328 "dma_device_type": 2 00:16:36.328 }, 00:16:36.328 { 00:16:36.328 "dma_device_id": "system", 00:16:36.328 "dma_device_type": 1 00:16:36.328 }, 00:16:36.328 { 00:16:36.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.328 "dma_device_type": 2 00:16:36.328 } 00:16:36.328 ], 00:16:36.328 "driver_specific": { 00:16:36.328 "raid": { 00:16:36.328 "uuid": 
"32e9a44d-7048-4369-8d70-aa22423e818b", 00:16:36.328 "strip_size_kb": 0, 00:16:36.328 "state": "online", 00:16:36.328 "raid_level": "raid1", 00:16:36.328 "superblock": true, 00:16:36.328 "num_base_bdevs": 2, 00:16:36.328 "num_base_bdevs_discovered": 2, 00:16:36.328 "num_base_bdevs_operational": 2, 00:16:36.328 "base_bdevs_list": [ 00:16:36.328 { 00:16:36.328 "name": "BaseBdev1", 00:16:36.328 "uuid": "c14d372e-8635-4c8b-8de2-5f9953eb8814", 00:16:36.328 "is_configured": true, 00:16:36.328 "data_offset": 256, 00:16:36.328 "data_size": 7936 00:16:36.328 }, 00:16:36.328 { 00:16:36.328 "name": "BaseBdev2", 00:16:36.328 "uuid": "7514c820-8cf5-4f42-94c2-1faae6d35f1c", 00:16:36.328 "is_configured": true, 00:16:36.328 "data_offset": 256, 00:16:36.328 "data_size": 7936 00:16:36.328 } 00:16:36.328 ] 00:16:36.328 } 00:16:36.328 } 00:16:36.328 }' 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.328 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:36.328 BaseBdev2' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.588 
15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.588 [2024-11-10 15:25:42.844288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.588 "name": "Existed_Raid", 00:16:36.588 "uuid": "32e9a44d-7048-4369-8d70-aa22423e818b", 00:16:36.588 "strip_size_kb": 0, 00:16:36.588 "state": "online", 00:16:36.588 "raid_level": "raid1", 00:16:36.588 "superblock": true, 00:16:36.588 "num_base_bdevs": 2, 00:16:36.588 "num_base_bdevs_discovered": 1, 00:16:36.588 "num_base_bdevs_operational": 1, 00:16:36.588 "base_bdevs_list": [ 00:16:36.588 { 00:16:36.588 "name": null, 00:16:36.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.588 "is_configured": false, 00:16:36.588 "data_offset": 0, 00:16:36.588 "data_size": 7936 00:16:36.588 }, 00:16:36.588 { 00:16:36.588 "name": "BaseBdev2", 00:16:36.588 "uuid": "7514c820-8cf5-4f42-94c2-1faae6d35f1c", 00:16:36.588 "is_configured": true, 00:16:36.588 "data_offset": 256, 00:16:36.588 "data_size": 7936 00:16:36.588 } 00:16:36.588 ] 00:16:36.588 }' 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.588 15:25:42 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- 
# jq -r '.[0]["name"]' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 [2024-11-10 15:25:43.368517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:37.158 [2024-11-10 15:25:43.368615] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.158 [2024-11-10 15:25:43.389529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.158 [2024-11-10 15:25:43.389656] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.158 [2024-11-10 15:25:43.389672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:37.158 15:25:43 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 97709 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 97709 ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 97709 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97709 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97709' 00:16:37.158 
killing process with pid 97709 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@971 -- # kill 97709 00:16:37.158 [2024-11-10 15:25:43.491066] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:37.158 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@976 -- # wait 97709 00:16:37.158 [2024-11-10 15:25:43.492616] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:37.728 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:37.728 00:16:37.728 real 0m4.144s 00:16:37.728 user 0m6.343s 00:16:37.728 sys 0m0.990s 00:16:37.728 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:37.728 15:25:43 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.728 ************************************ 00:16:37.728 END TEST raid_state_function_test_sb_4k 00:16:37.728 ************************************ 00:16:37.728 15:25:43 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:37.728 15:25:43 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:16:37.728 15:25:43 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:37.728 15:25:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:37.728 ************************************ 00:16:37.728 START TEST raid_superblock_test_4k 00:16:37.728 ************************************ 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # 
base_bdevs_malloc=() 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:37.728 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=97956 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 97956 00:16:37.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # '[' -z 97956 ']' 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:37.729 15:25:43 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.729 [2024-11-10 15:25:44.000921] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:16:37.729 [2024-11-10 15:25:44.001129] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97956 ] 00:16:37.988 [2024-11-10 15:25:44.134849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:16:37.988 [2024-11-10 15:25:44.175970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.988 [2024-11-10 15:25:44.217513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.988 [2024-11-10 15:25:44.296597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:37.988 [2024-11-10 15:25:44.296738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@866 -- # return 0 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.558 malloc1 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.558 [2024-11-10 15:25:44.865551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.558 [2024-11-10 15:25:44.865681] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.558 [2024-11-10 15:25:44.865727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:38.558 [2024-11-10 15:25:44.865762] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.558 [2024-11-10 15:25:44.868311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.558 [2024-11-10 15:25:44.868380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.558 pt1 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:38.558 15:25:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.558 malloc2 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.558 [2024-11-10 15:25:44.904912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:38.558 [2024-11-10 15:25:44.905004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.558 [2024-11-10 15:25:44.905053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:38.558 [2024-11-10 15:25:44.905081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.558 [2024-11-10 15:25:44.907599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.558 [2024-11-10 15:25:44.907635] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:38.558 pt2 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( 
i++ )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.558 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.559 [2024-11-10 15:25:44.916945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.818 [2024-11-10 15:25:44.919147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.818 [2024-11-10 15:25:44.919294] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:38.818 [2024-11-10 15:25:44.919308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:38.818 [2024-11-10 15:25:44.919601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:38.818 [2024-11-10 15:25:44.919734] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:38.819 [2024-11-10 15:25:44.919749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:38.819 [2024-11-10 15:25:44.919871] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.819 15:25:44 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.819 "name": "raid_bdev1", 00:16:38.819 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:38.819 "strip_size_kb": 0, 00:16:38.819 "state": "online", 00:16:38.819 "raid_level": "raid1", 00:16:38.819 "superblock": true, 00:16:38.819 "num_base_bdevs": 2, 00:16:38.819 "num_base_bdevs_discovered": 2, 00:16:38.819 "num_base_bdevs_operational": 2, 00:16:38.819 "base_bdevs_list": [ 00:16:38.819 { 00:16:38.819 "name": "pt1", 00:16:38.819 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:38.819 "is_configured": true, 00:16:38.819 "data_offset": 256, 00:16:38.819 "data_size": 
7936 00:16:38.819 }, 00:16:38.819 { 00:16:38.819 "name": "pt2", 00:16:38.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.819 "is_configured": true, 00:16:38.819 "data_offset": 256, 00:16:38.819 "data_size": 7936 00:16:38.819 } 00:16:38.819 ] 00:16:38.819 }' 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.819 15:25:44 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.078 [2024-11-10 15:25:45.377317] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.078 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:39.078 "name": "raid_bdev1", 00:16:39.078 "aliases": [ 00:16:39.078 
"a688ebf9-d1c3-4d75-b266-8ad2199970b5" 00:16:39.078 ], 00:16:39.078 "product_name": "Raid Volume", 00:16:39.078 "block_size": 4096, 00:16:39.078 "num_blocks": 7936, 00:16:39.078 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:39.078 "assigned_rate_limits": { 00:16:39.078 "rw_ios_per_sec": 0, 00:16:39.078 "rw_mbytes_per_sec": 0, 00:16:39.079 "r_mbytes_per_sec": 0, 00:16:39.079 "w_mbytes_per_sec": 0 00:16:39.079 }, 00:16:39.079 "claimed": false, 00:16:39.079 "zoned": false, 00:16:39.079 "supported_io_types": { 00:16:39.079 "read": true, 00:16:39.079 "write": true, 00:16:39.079 "unmap": false, 00:16:39.079 "flush": false, 00:16:39.079 "reset": true, 00:16:39.079 "nvme_admin": false, 00:16:39.079 "nvme_io": false, 00:16:39.079 "nvme_io_md": false, 00:16:39.079 "write_zeroes": true, 00:16:39.079 "zcopy": false, 00:16:39.079 "get_zone_info": false, 00:16:39.079 "zone_management": false, 00:16:39.079 "zone_append": false, 00:16:39.079 "compare": false, 00:16:39.079 "compare_and_write": false, 00:16:39.079 "abort": false, 00:16:39.079 "seek_hole": false, 00:16:39.079 "seek_data": false, 00:16:39.079 "copy": false, 00:16:39.079 "nvme_iov_md": false 00:16:39.079 }, 00:16:39.079 "memory_domains": [ 00:16:39.079 { 00:16:39.079 "dma_device_id": "system", 00:16:39.079 "dma_device_type": 1 00:16:39.079 }, 00:16:39.079 { 00:16:39.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.079 "dma_device_type": 2 00:16:39.079 }, 00:16:39.079 { 00:16:39.079 "dma_device_id": "system", 00:16:39.079 "dma_device_type": 1 00:16:39.079 }, 00:16:39.079 { 00:16:39.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:39.079 "dma_device_type": 2 00:16:39.079 } 00:16:39.079 ], 00:16:39.079 "driver_specific": { 00:16:39.079 "raid": { 00:16:39.079 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:39.079 "strip_size_kb": 0, 00:16:39.079 "state": "online", 00:16:39.079 "raid_level": "raid1", 00:16:39.079 "superblock": true, 00:16:39.079 "num_base_bdevs": 2, 00:16:39.079 
"num_base_bdevs_discovered": 2, 00:16:39.079 "num_base_bdevs_operational": 2, 00:16:39.079 "base_bdevs_list": [ 00:16:39.079 { 00:16:39.079 "name": "pt1", 00:16:39.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.079 "is_configured": true, 00:16:39.079 "data_offset": 256, 00:16:39.079 "data_size": 7936 00:16:39.079 }, 00:16:39.079 { 00:16:39.079 "name": "pt2", 00:16:39.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.079 "is_configured": true, 00:16:39.079 "data_offset": 256, 00:16:39.079 "data_size": 7936 00:16:39.079 } 00:16:39.079 ] 00:16:39.079 } 00:16:39.079 } 00:16:39.079 }' 00:16:39.079 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:39.339 pt2' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 
00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 [2024-11-10 15:25:45.613298] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a688ebf9-d1c3-4d75-b266-8ad2199970b5 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a688ebf9-d1c3-4d75-b266-8ad2199970b5 ']' 00:16:39.339 
15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 [2024-11-10 15:25:45.657095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.339 [2024-11-10 15:25:45.657117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:39.339 [2024-11-10 15:25:45.657184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:39.339 [2024-11-10 15:25:45.657232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:39.339 [2024-11-10 15:25:45.657244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.339 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.599 15:25:45 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.599 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.599 [2024-11-10 15:25:45.805151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:39.599 [2024-11-10 15:25:45.807245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:39.599 [2024-11-10 15:25:45.807338] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:39.599 [2024-11-10 15:25:45.807432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:39.599 [2024-11-10 15:25:45.807530] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:39.599 [2024-11-10 15:25:45.807574] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:16:39.599 request: 00:16:39.599 { 00:16:39.599 "name": "raid_bdev1", 00:16:39.599 "raid_level": "raid1", 00:16:39.599 "base_bdevs": [ 00:16:39.599 "malloc1", 
00:16:39.599 "malloc2" 00:16:39.600 ], 00:16:39.600 "superblock": false, 00:16:39.600 "method": "bdev_raid_create", 00:16:39.600 "req_id": 1 00:16:39.600 } 00:16:39.600 Got JSON-RPC error response 00:16:39.600 response: 00:16:39.600 { 00:16:39.600 "code": -17, 00:16:39.600 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:39.600 } 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:39.600 [2024-11-10 15:25:45.853165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:39.600 [2024-11-10 15:25:45.853249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:39.600 [2024-11-10 15:25:45.853266] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:39.600 [2024-11-10 15:25:45.853280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:39.600 [2024-11-10 15:25:45.855632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:39.600 [2024-11-10 15:25:45.855669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:39.600 [2024-11-10 15:25:45.855722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:39.600 [2024-11-10 15:25:45.855781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:39.600 pt1 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.600 "name": "raid_bdev1", 00:16:39.600 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:39.600 "strip_size_kb": 0, 00:16:39.600 "state": "configuring", 00:16:39.600 "raid_level": "raid1", 00:16:39.600 "superblock": true, 00:16:39.600 "num_base_bdevs": 2, 00:16:39.600 "num_base_bdevs_discovered": 1, 00:16:39.600 "num_base_bdevs_operational": 2, 00:16:39.600 "base_bdevs_list": [ 00:16:39.600 { 00:16:39.600 "name": "pt1", 00:16:39.600 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:39.600 "is_configured": true, 00:16:39.600 "data_offset": 256, 00:16:39.600 "data_size": 7936 00:16:39.600 }, 00:16:39.600 { 00:16:39.600 "name": null, 00:16:39.600 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:39.600 "is_configured": false, 00:16:39.600 "data_offset": 256, 00:16:39.600 "data_size": 7936 00:16:39.600 } 00:16:39.600 ] 00:16:39.600 }' 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.600 15:25:45 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.170 15:25:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.170 [2024-11-10 15:25:46.265257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.170 [2024-11-10 15:25:46.265364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.170 [2024-11-10 15:25:46.265398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:40.170 [2024-11-10 15:25:46.265427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.170 [2024-11-10 15:25:46.265801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.170 [2024-11-10 15:25:46.265861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.170 [2024-11-10 15:25:46.265940] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.170 [2024-11-10 15:25:46.265990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.170 [2024-11-10 15:25:46.266108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:40.170 [2024-11-10 15:25:46.266156] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:40.170 [2024-11-10 15:25:46.266408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:16:40.170 [2024-11-10 15:25:46.266561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:40.170 [2024-11-10 15:25:46.266602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:40.170 [2024-11-10 15:25:46.266737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.170 pt2 00:16:40.170 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.171 "name": "raid_bdev1", 00:16:40.171 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:40.171 "strip_size_kb": 0, 00:16:40.171 "state": "online", 00:16:40.171 "raid_level": "raid1", 00:16:40.171 "superblock": true, 00:16:40.171 "num_base_bdevs": 2, 00:16:40.171 "num_base_bdevs_discovered": 2, 00:16:40.171 "num_base_bdevs_operational": 2, 00:16:40.171 "base_bdevs_list": [ 00:16:40.171 { 00:16:40.171 "name": "pt1", 00:16:40.171 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.171 "is_configured": true, 00:16:40.171 "data_offset": 256, 00:16:40.171 "data_size": 7936 00:16:40.171 }, 00:16:40.171 { 00:16:40.171 "name": "pt2", 00:16:40.171 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.171 "is_configured": true, 00:16:40.171 "data_offset": 256, 00:16:40.171 "data_size": 7936 00:16:40.171 } 00:16:40.171 ] 00:16:40.171 }' 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.171 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:40.431 
15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.431 [2024-11-10 15:25:46.701587] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:40.431 "name": "raid_bdev1", 00:16:40.431 "aliases": [ 00:16:40.431 "a688ebf9-d1c3-4d75-b266-8ad2199970b5" 00:16:40.431 ], 00:16:40.431 "product_name": "Raid Volume", 00:16:40.431 "block_size": 4096, 00:16:40.431 "num_blocks": 7936, 00:16:40.431 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:40.431 "assigned_rate_limits": { 00:16:40.431 "rw_ios_per_sec": 0, 00:16:40.431 "rw_mbytes_per_sec": 0, 00:16:40.431 "r_mbytes_per_sec": 0, 00:16:40.431 "w_mbytes_per_sec": 0 00:16:40.431 }, 00:16:40.431 "claimed": false, 00:16:40.431 "zoned": false, 00:16:40.431 "supported_io_types": { 00:16:40.431 "read": true, 00:16:40.431 "write": true, 00:16:40.431 "unmap": false, 00:16:40.431 "flush": false, 00:16:40.431 "reset": true, 00:16:40.431 "nvme_admin": false, 00:16:40.431 "nvme_io": false, 00:16:40.431 "nvme_io_md": false, 00:16:40.431 "write_zeroes": true, 00:16:40.431 "zcopy": false, 00:16:40.431 "get_zone_info": 
false, 00:16:40.431 "zone_management": false, 00:16:40.431 "zone_append": false, 00:16:40.431 "compare": false, 00:16:40.431 "compare_and_write": false, 00:16:40.431 "abort": false, 00:16:40.431 "seek_hole": false, 00:16:40.431 "seek_data": false, 00:16:40.431 "copy": false, 00:16:40.431 "nvme_iov_md": false 00:16:40.431 }, 00:16:40.431 "memory_domains": [ 00:16:40.431 { 00:16:40.431 "dma_device_id": "system", 00:16:40.431 "dma_device_type": 1 00:16:40.431 }, 00:16:40.431 { 00:16:40.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.431 "dma_device_type": 2 00:16:40.431 }, 00:16:40.431 { 00:16:40.431 "dma_device_id": "system", 00:16:40.431 "dma_device_type": 1 00:16:40.431 }, 00:16:40.431 { 00:16:40.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:40.431 "dma_device_type": 2 00:16:40.431 } 00:16:40.431 ], 00:16:40.431 "driver_specific": { 00:16:40.431 "raid": { 00:16:40.431 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:40.431 "strip_size_kb": 0, 00:16:40.431 "state": "online", 00:16:40.431 "raid_level": "raid1", 00:16:40.431 "superblock": true, 00:16:40.431 "num_base_bdevs": 2, 00:16:40.431 "num_base_bdevs_discovered": 2, 00:16:40.431 "num_base_bdevs_operational": 2, 00:16:40.431 "base_bdevs_list": [ 00:16:40.431 { 00:16:40.431 "name": "pt1", 00:16:40.431 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:40.431 "is_configured": true, 00:16:40.431 "data_offset": 256, 00:16:40.431 "data_size": 7936 00:16:40.431 }, 00:16:40.431 { 00:16:40.431 "name": "pt2", 00:16:40.431 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.431 "is_configured": true, 00:16:40.431 "data_offset": 256, 00:16:40.431 "data_size": 7936 00:16:40.431 } 00:16:40.431 ] 00:16:40.431 } 00:16:40.431 } 00:16:40.431 }' 00:16:40.431 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # 
base_bdev_names='pt1 00:16:40.692 pt2' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.692 15:25:46 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:40.692 [2024-11-10 15:25:46.945657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a688ebf9-d1c3-4d75-b266-8ad2199970b5 '!=' a688ebf9-d1c3-4d75-b266-8ad2199970b5 ']' 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 [2024-11-10 15:25:46.993466] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.692 15:25:46 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.692 "name": "raid_bdev1", 00:16:40.692 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:40.692 "strip_size_kb": 0, 00:16:40.692 "state": "online", 00:16:40.692 "raid_level": "raid1", 00:16:40.692 "superblock": true, 00:16:40.692 "num_base_bdevs": 2, 00:16:40.692 "num_base_bdevs_discovered": 1, 
00:16:40.692 "num_base_bdevs_operational": 1, 00:16:40.692 "base_bdevs_list": [ 00:16:40.692 { 00:16:40.692 "name": null, 00:16:40.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.692 "is_configured": false, 00:16:40.692 "data_offset": 0, 00:16:40.692 "data_size": 7936 00:16:40.692 }, 00:16:40.692 { 00:16:40.692 "name": "pt2", 00:16:40.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.692 "is_configured": true, 00:16:40.692 "data_offset": 256, 00:16:40.692 "data_size": 7936 00:16:40.692 } 00:16:40.692 ] 00:16:40.692 }' 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.692 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 [2024-11-10 15:25:47.421552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.262 [2024-11-10 15:25:47.421614] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.262 [2024-11-10 15:25:47.421684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.262 [2024-11-10 15:25:47.421734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.262 [2024-11-10 15:25:47.421799] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.262 
15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 [2024-11-10 15:25:47.497578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:41.262 [2024-11-10 15:25:47.497679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.262 [2024-11-10 15:25:47.497696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:41.262 [2024-11-10 15:25:47.497707] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.262 [2024-11-10 15:25:47.500143] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.262 [2024-11-10 15:25:47.500184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:41.262 [2024-11-10 15:25:47.500241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:41.262 [2024-11-10 15:25:47.500273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.262 [2024-11-10 15:25:47.500335] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:41.262 [2024-11-10 15:25:47.500345] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.262 [2024-11-10 15:25:47.500561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:41.262 [2024-11-10 15:25:47.500677] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:41.262 [2024-11-10 15:25:47.500734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:41.262 [2024-11-10 15:25:47.500839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.262 pt2 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.262 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.262 "name": "raid_bdev1", 00:16:41.262 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:41.262 "strip_size_kb": 0, 00:16:41.262 "state": 
"online", 00:16:41.262 "raid_level": "raid1", 00:16:41.262 "superblock": true, 00:16:41.262 "num_base_bdevs": 2, 00:16:41.262 "num_base_bdevs_discovered": 1, 00:16:41.262 "num_base_bdevs_operational": 1, 00:16:41.263 "base_bdevs_list": [ 00:16:41.263 { 00:16:41.263 "name": null, 00:16:41.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.263 "is_configured": false, 00:16:41.263 "data_offset": 256, 00:16:41.263 "data_size": 7936 00:16:41.263 }, 00:16:41.263 { 00:16:41.263 "name": "pt2", 00:16:41.263 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.263 "is_configured": true, 00:16:41.263 "data_offset": 256, 00:16:41.263 "data_size": 7936 00:16:41.263 } 00:16:41.263 ] 00:16:41.263 }' 00:16:41.263 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.263 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.832 [2024-11-10 15:25:47.953713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.832 [2024-11-10 15:25:47.953777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:41.832 [2024-11-10 15:25:47.953857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:41.832 [2024-11-10 15:25:47.953911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:41.832 [2024-11-10 15:25:47.953941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.832 15:25:47 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.832 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:41.832 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:41.832 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:41.832 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.833 [2024-11-10 15:25:48.013713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:41.833 [2024-11-10 15:25:48.013811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.833 [2024-11-10 15:25:48.013834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:41.833 [2024-11-10 15:25:48.013843] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.833 [2024-11-10 15:25:48.016234] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.833 [2024-11-10 15:25:48.016269] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:41.833 
[2024-11-10 15:25:48.016326] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:41.833 [2024-11-10 15:25:48.016353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:41.833 [2024-11-10 15:25:48.016448] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:41.833 [2024-11-10 15:25:48.016465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:41.833 [2024-11-10 15:25:48.016482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:16:41.833 [2024-11-10 15:25:48.016517] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:41.833 [2024-11-10 15:25:48.016580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:16:41.833 [2024-11-10 15:25:48.016587] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.833 [2024-11-10 15:25:48.016847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:41.833 [2024-11-10 15:25:48.017025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:16:41.833 [2024-11-10 15:25:48.017044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:16:41.833 [2024-11-10 15:25:48.017149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.833 pt1 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.833 "name": "raid_bdev1", 00:16:41.833 "uuid": "a688ebf9-d1c3-4d75-b266-8ad2199970b5", 00:16:41.833 "strip_size_kb": 0, 00:16:41.833 "state": "online", 00:16:41.833 "raid_level": "raid1", 00:16:41.833 "superblock": true, 00:16:41.833 "num_base_bdevs": 2, 00:16:41.833 "num_base_bdevs_discovered": 1, 00:16:41.833 "num_base_bdevs_operational": 1, 00:16:41.833 "base_bdevs_list": [ 
00:16:41.833 { 00:16:41.833 "name": null, 00:16:41.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.833 "is_configured": false, 00:16:41.833 "data_offset": 256, 00:16:41.833 "data_size": 7936 00:16:41.833 }, 00:16:41.833 { 00:16:41.833 "name": "pt2", 00:16:41.833 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.833 "is_configured": true, 00:16:41.833 "data_offset": 256, 00:16:41.833 "data_size": 7936 00:16:41.833 } 00:16:41.833 ] 00:16:41.833 }' 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.833 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.093 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.353 [2024-11-10 15:25:48.458068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a688ebf9-d1c3-4d75-b266-8ad2199970b5 '!=' a688ebf9-d1c3-4d75-b266-8ad2199970b5 ']' 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 97956 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # '[' -z 97956 ']' 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # kill -0 97956 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # uname 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 97956 00:16:42.353 killing process with pid 97956 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 97956' 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@971 -- # kill 97956 00:16:42.353 [2024-11-10 15:25:48.536919] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:42.353 [2024-11-10 15:25:48.536985] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.353 [2024-11-10 15:25:48.537034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.353 [2024-11-10 15:25:48.537046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:16:42.353 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@976 -- # 
wait 97956 00:16:42.353 [2024-11-10 15:25:48.579058] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.619 15:25:48 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:42.619 00:16:42.619 real 0m5.001s 00:16:42.619 user 0m7.964s 00:16:42.619 sys 0m1.186s 00:16:42.619 ************************************ 00:16:42.619 END TEST raid_superblock_test_4k 00:16:42.619 ************************************ 00:16:42.619 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:42.619 15:25:48 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.619 15:25:48 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:42.619 15:25:48 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:42.619 15:25:48 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:16:42.619 15:25:48 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:42.619 15:25:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.893 ************************************ 00:16:42.893 START TEST raid_rebuild_test_sb_4k 00:16:42.893 ************************************ 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:42.893 15:25:48 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:42.893 15:25:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:42.893 
15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=98263 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 98263 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@833 -- # '[' -z 98263 ']' 00:16:42.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.893 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.894 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:42.894 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.894 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:42.894 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.894 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:42.894 Zero copy mechanism will not be used. 00:16:42.894 [2024-11-10 15:25:49.102279] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:16:42.894 [2024-11-10 15:25:49.102419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98263 ] 00:16:42.894 [2024-11-10 15:25:49.240542] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:16:43.174 [2024-11-10 15:25:49.278668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.174 [2024-11-10 15:25:49.319952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.174 [2024-11-10 15:25:49.399114] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.174 [2024-11-10 15:25:49.399239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@866 -- # return 0 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 BaseBdev1_malloc 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 [2024-11-10 15:25:49.947552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.776 [2024-11-10 15:25:49.947622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.776 [2024-11-10 15:25:49.947650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000007280 00:16:43.776 [2024-11-10 15:25:49.947665] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.776 [2024-11-10 15:25:49.950108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.776 [2024-11-10 15:25:49.950194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.776 BaseBdev1 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 BaseBdev2_malloc 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 [2024-11-10 15:25:49.982447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:43.776 [2024-11-10 15:25:49.982503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.776 [2024-11-10 15:25:49.982524] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:43.776 [2024-11-10 15:25:49.982535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.776 [2024-11-10 
15:25:49.984870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.776 [2024-11-10 15:25:49.984920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:43.776 BaseBdev2 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.776 15:25:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.776 spare_malloc 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 spare_delay 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 [2024-11-10 15:25:50.029564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:43.777 [2024-11-10 15:25:50.029636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.777 [2024-11-10 15:25:50.029656] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:43.777 [2024-11-10 15:25:50.029669] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.777 [2024-11-10 15:25:50.032096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.777 [2024-11-10 15:25:50.032187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:43.777 spare 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 [2024-11-10 15:25:50.041640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.777 [2024-11-10 15:25:50.043770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:43.777 [2024-11-10 15:25:50.043943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:16:43.777 [2024-11-10 15:25:50.043957] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:43.777 [2024-11-10 15:25:50.044237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:43.777 [2024-11-10 15:25:50.044408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:16:43.777 [2024-11-10 15:25:50.044418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:16:43.777 [2024-11-10 15:25:50.044534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:43.777 15:25:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.777 "name": "raid_bdev1", 00:16:43.777 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:43.777 
"strip_size_kb": 0, 00:16:43.777 "state": "online", 00:16:43.777 "raid_level": "raid1", 00:16:43.777 "superblock": true, 00:16:43.777 "num_base_bdevs": 2, 00:16:43.777 "num_base_bdevs_discovered": 2, 00:16:43.777 "num_base_bdevs_operational": 2, 00:16:43.777 "base_bdevs_list": [ 00:16:43.777 { 00:16:43.777 "name": "BaseBdev1", 00:16:43.777 "uuid": "d362d54e-ddba-5253-ac1b-ea7b77114417", 00:16:43.777 "is_configured": true, 00:16:43.777 "data_offset": 256, 00:16:43.777 "data_size": 7936 00:16:43.777 }, 00:16:43.777 { 00:16:43.777 "name": "BaseBdev2", 00:16:43.777 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:43.777 "is_configured": true, 00:16:43.777 "data_offset": 256, 00:16:43.777 "data_size": 7936 00:16:43.777 } 00:16:43.777 ] 00:16:43.777 }' 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.777 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.347 [2024-11-10 15:25:50.493965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.347 
15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:44.347 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:44.608 [2024-11-10 15:25:50.745841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:16:44.608 /dev/nbd0 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:44.608 1+0 records in 00:16:44.608 1+0 records out 00:16:44.608 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528715 s, 7.7 MB/s 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:44.608 15:25:50 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:44.608 15:25:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:45.177 7936+0 records in 00:16:45.177 7936+0 records out 00:16:45.177 32505856 bytes (33 MB, 31 MiB) copied, 0.600307 s, 54.1 MB/s 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:45.177 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:45.437 [2024-11-10 15:25:51.636234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.437 [2024-11-10 15:25:51.648340] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.437 "name": "raid_bdev1", 00:16:45.437 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:45.437 "strip_size_kb": 0, 00:16:45.437 "state": "online", 00:16:45.437 "raid_level": "raid1", 00:16:45.437 "superblock": true, 00:16:45.437 "num_base_bdevs": 2, 00:16:45.437 "num_base_bdevs_discovered": 1, 00:16:45.437 "num_base_bdevs_operational": 1, 00:16:45.437 "base_bdevs_list": [ 00:16:45.437 { 00:16:45.437 "name": null, 00:16:45.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.437 "is_configured": false, 00:16:45.437 "data_offset": 0, 00:16:45.437 "data_size": 7936 00:16:45.437 }, 00:16:45.437 { 00:16:45.437 "name": "BaseBdev2", 00:16:45.437 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:45.437 "is_configured": true, 00:16:45.437 "data_offset": 256, 00:16:45.437 "data_size": 7936 00:16:45.437 } 00:16:45.437 ] 00:16:45.437 }' 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.437 15:25:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:46.007 15:25:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:46.007 15:25:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.007 15:25:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.007 [2024-11-10 15:25:52.076411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:46.007 [2024-11-10 15:25:52.094542] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:16:46.007 15:25:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.007 15:25:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:46.007 [2024-11-10 15:25:52.097111] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.947 15:25:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.947 "name": "raid_bdev1", 00:16:46.947 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:46.947 "strip_size_kb": 0, 00:16:46.947 "state": "online", 00:16:46.947 "raid_level": "raid1", 00:16:46.947 "superblock": true, 00:16:46.947 "num_base_bdevs": 2, 00:16:46.947 "num_base_bdevs_discovered": 2, 00:16:46.947 "num_base_bdevs_operational": 2, 00:16:46.947 "process": { 00:16:46.947 "type": "rebuild", 00:16:46.947 "target": "spare", 00:16:46.947 "progress": { 00:16:46.947 "blocks": 2560, 00:16:46.947 "percent": 32 00:16:46.947 } 00:16:46.947 }, 00:16:46.947 "base_bdevs_list": [ 00:16:46.947 { 00:16:46.947 "name": "spare", 00:16:46.947 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:46.947 "is_configured": true, 00:16:46.947 "data_offset": 256, 00:16:46.947 "data_size": 7936 00:16:46.947 }, 00:16:46.947 { 00:16:46.947 "name": "BaseBdev2", 00:16:46.947 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:46.947 "is_configured": true, 00:16:46.947 "data_offset": 256, 00:16:46.947 "data_size": 7936 00:16:46.947 } 00:16:46.947 ] 00:16:46.947 }' 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:46.947 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.947 15:25:53 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:46.947 [2024-11-10 15:25:53.258742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.207 [2024-11-10 15:25:53.307540] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.207 [2024-11-10 15:25:53.307657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.207 [2024-11-10 15:25:53.307690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.207 [2024-11-10 15:25:53.307701] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.207 15:25:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.207 "name": "raid_bdev1", 00:16:47.207 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:47.207 "strip_size_kb": 0, 00:16:47.207 "state": "online", 00:16:47.207 "raid_level": "raid1", 00:16:47.207 "superblock": true, 00:16:47.207 "num_base_bdevs": 2, 00:16:47.207 "num_base_bdevs_discovered": 1, 00:16:47.207 "num_base_bdevs_operational": 1, 00:16:47.207 "base_bdevs_list": [ 00:16:47.207 { 00:16:47.207 "name": null, 00:16:47.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.207 "is_configured": false, 00:16:47.207 "data_offset": 0, 00:16:47.207 "data_size": 7936 00:16:47.207 }, 00:16:47.207 { 00:16:47.207 "name": "BaseBdev2", 00:16:47.207 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:47.207 "is_configured": true, 00:16:47.207 "data_offset": 256, 00:16:47.207 "data_size": 7936 00:16:47.207 } 00:16:47.207 ] 00:16:47.207 }' 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.207 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.467 15:25:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.467 "name": "raid_bdev1", 00:16:47.467 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:47.467 "strip_size_kb": 0, 00:16:47.467 "state": "online", 00:16:47.467 "raid_level": "raid1", 00:16:47.467 "superblock": true, 00:16:47.467 "num_base_bdevs": 2, 00:16:47.467 "num_base_bdevs_discovered": 1, 00:16:47.467 "num_base_bdevs_operational": 1, 00:16:47.467 "base_bdevs_list": [ 00:16:47.467 { 00:16:47.467 "name": null, 00:16:47.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.467 "is_configured": false, 00:16:47.467 "data_offset": 0, 00:16:47.467 "data_size": 7936 00:16:47.467 }, 00:16:47.467 { 00:16:47.467 "name": "BaseBdev2", 00:16:47.467 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:47.467 "is_configured": true, 00:16:47.467 "data_offset": 256, 00:16:47.467 "data_size": 7936 00:16:47.467 } 00:16:47.467 ] 00:16:47.467 }' 00:16:47.467 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.727 15:25:53 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.727 [2024-11-10 15:25:53.892008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.727 [2024-11-10 15:25:53.899199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.727 15:25:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:47.727 [2024-11-10 15:25:53.901415] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.667 "name": "raid_bdev1", 00:16:48.667 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:48.667 "strip_size_kb": 0, 00:16:48.667 "state": "online", 00:16:48.667 "raid_level": "raid1", 00:16:48.667 "superblock": true, 00:16:48.667 "num_base_bdevs": 2, 00:16:48.667 "num_base_bdevs_discovered": 2, 00:16:48.667 "num_base_bdevs_operational": 2, 00:16:48.667 "process": { 00:16:48.667 "type": "rebuild", 00:16:48.667 "target": "spare", 00:16:48.667 "progress": { 00:16:48.667 "blocks": 2560, 00:16:48.667 "percent": 32 00:16:48.667 } 00:16:48.667 }, 00:16:48.667 "base_bdevs_list": [ 00:16:48.667 { 00:16:48.667 "name": "spare", 00:16:48.667 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:48.667 "is_configured": true, 00:16:48.667 "data_offset": 256, 00:16:48.667 "data_size": 7936 00:16:48.667 }, 00:16:48.667 { 00:16:48.667 "name": "BaseBdev2", 00:16:48.667 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:48.667 "is_configured": true, 00:16:48.667 "data_offset": 256, 00:16:48.667 "data_size": 7936 00:16:48.667 } 00:16:48.667 ] 00:16:48.667 }' 00:16:48.667 15:25:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.667 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.667 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:48.926 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=569 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.926 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.927 15:25:55 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.927 "name": "raid_bdev1", 00:16:48.927 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:48.927 "strip_size_kb": 0, 00:16:48.927 "state": "online", 00:16:48.927 "raid_level": "raid1", 00:16:48.927 "superblock": true, 00:16:48.927 "num_base_bdevs": 2, 00:16:48.927 "num_base_bdevs_discovered": 2, 00:16:48.927 "num_base_bdevs_operational": 2, 00:16:48.927 "process": { 00:16:48.927 "type": "rebuild", 00:16:48.927 "target": "spare", 00:16:48.927 "progress": { 00:16:48.927 "blocks": 2816, 00:16:48.927 "percent": 35 00:16:48.927 } 00:16:48.927 }, 00:16:48.927 "base_bdevs_list": [ 00:16:48.927 { 00:16:48.927 "name": "spare", 00:16:48.927 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:48.927 "is_configured": true, 00:16:48.927 "data_offset": 256, 00:16:48.927 "data_size": 7936 00:16:48.927 }, 00:16:48.927 { 00:16:48.927 "name": "BaseBdev2", 00:16:48.927 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:48.927 "is_configured": true, 00:16:48.927 "data_offset": 256, 00:16:48.927 "data_size": 7936 00:16:48.927 } 00:16:48.927 ] 00:16:48.927 }' 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.927 15:25:55 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.866 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.125 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.125 "name": "raid_bdev1", 00:16:50.126 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:50.126 "strip_size_kb": 0, 00:16:50.126 "state": "online", 00:16:50.126 "raid_level": "raid1", 00:16:50.126 "superblock": true, 00:16:50.126 "num_base_bdevs": 2, 00:16:50.126 "num_base_bdevs_discovered": 2, 00:16:50.126 "num_base_bdevs_operational": 2, 00:16:50.126 "process": { 00:16:50.126 "type": "rebuild", 00:16:50.126 "target": "spare", 00:16:50.126 "progress": { 00:16:50.126 "blocks": 5632, 00:16:50.126 "percent": 70 00:16:50.126 } 00:16:50.126 }, 00:16:50.126 "base_bdevs_list": [ 00:16:50.126 { 00:16:50.126 "name": "spare", 00:16:50.126 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:50.126 "is_configured": true, 00:16:50.126 "data_offset": 256, 00:16:50.126 "data_size": 7936 00:16:50.126 
}, 00:16:50.126 { 00:16:50.126 "name": "BaseBdev2", 00:16:50.126 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:50.126 "is_configured": true, 00:16:50.126 "data_offset": 256, 00:16:50.126 "data_size": 7936 00:16:50.126 } 00:16:50.126 ] 00:16:50.126 }' 00:16:50.126 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.126 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:50.126 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.126 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.126 15:25:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:50.695 [2024-11-10 15:25:57.026228] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:50.695 [2024-11-10 15:25:57.026370] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:50.695 [2024-11-10 15:25:57.026535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.264 "name": "raid_bdev1", 00:16:51.264 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:51.264 "strip_size_kb": 0, 00:16:51.264 "state": "online", 00:16:51.264 "raid_level": "raid1", 00:16:51.264 "superblock": true, 00:16:51.264 "num_base_bdevs": 2, 00:16:51.264 "num_base_bdevs_discovered": 2, 00:16:51.264 "num_base_bdevs_operational": 2, 00:16:51.264 "base_bdevs_list": [ 00:16:51.264 { 00:16:51.264 "name": "spare", 00:16:51.264 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:51.264 "is_configured": true, 00:16:51.264 "data_offset": 256, 00:16:51.264 "data_size": 7936 00:16:51.264 }, 00:16:51.264 { 00:16:51.264 "name": "BaseBdev2", 00:16:51.264 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:51.264 "is_configured": true, 00:16:51.264 "data_offset": 256, 00:16:51.264 "data_size": 7936 00:16:51.264 } 00:16:51.264 ] 00:16:51.264 }' 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.264 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.265 "name": "raid_bdev1", 00:16:51.265 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:51.265 "strip_size_kb": 0, 00:16:51.265 "state": "online", 00:16:51.265 "raid_level": "raid1", 00:16:51.265 "superblock": true, 00:16:51.265 "num_base_bdevs": 2, 00:16:51.265 "num_base_bdevs_discovered": 2, 00:16:51.265 "num_base_bdevs_operational": 2, 00:16:51.265 "base_bdevs_list": [ 00:16:51.265 { 00:16:51.265 "name": "spare", 00:16:51.265 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:51.265 "is_configured": true, 00:16:51.265 "data_offset": 256, 00:16:51.265 "data_size": 7936 00:16:51.265 }, 00:16:51.265 { 00:16:51.265 "name": "BaseBdev2", 00:16:51.265 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:51.265 "is_configured": true, 
00:16:51.265 "data_offset": 256, 00:16:51.265 "data_size": 7936 00:16:51.265 } 00:16:51.265 ] 00:16:51.265 }' 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.265 15:25:57 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.265 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.524 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.524 "name": "raid_bdev1", 00:16:51.524 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:51.524 "strip_size_kb": 0, 00:16:51.524 "state": "online", 00:16:51.524 "raid_level": "raid1", 00:16:51.524 "superblock": true, 00:16:51.524 "num_base_bdevs": 2, 00:16:51.524 "num_base_bdevs_discovered": 2, 00:16:51.524 "num_base_bdevs_operational": 2, 00:16:51.525 "base_bdevs_list": [ 00:16:51.525 { 00:16:51.525 "name": "spare", 00:16:51.525 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:51.525 "is_configured": true, 00:16:51.525 "data_offset": 256, 00:16:51.525 "data_size": 7936 00:16:51.525 }, 00:16:51.525 { 00:16:51.525 "name": "BaseBdev2", 00:16:51.525 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:51.525 "is_configured": true, 00:16:51.525 "data_offset": 256, 00:16:51.525 "data_size": 7936 00:16:51.525 } 00:16:51.525 ] 00:16:51.525 }' 00:16:51.525 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.525 15:25:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.784 [2024-11-10 15:25:58.029920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.784 [2024-11-10 15:25:58.029953] bdev_raid.c:1899:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:16:51.784 [2024-11-10 15:25:58.030058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.784 [2024-11-10 15:25:58.030129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.784 [2024-11-10 15:25:58.030145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:51.784 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:52.044 /dev/nbd0 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.044 1+0 records in 00:16:52.044 1+0 records out 00:16:52.044 
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547201 s, 7.5 MB/s 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.044 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:52.304 /dev/nbd1 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # local i 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # break 00:16:52.304 15:25:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:52.304 1+0 records in 00:16:52.304 1+0 records out 00:16:52.304 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415073 s, 9.9 MB/s 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # size=4096 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # return 0 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:52.304 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@51 -- # local i 00:16:52.305 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.305 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:52.564 15:25:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:52.824 15:25:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.824 [2024-11-10 15:25:59.125912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:52.824 [2024-11-10 15:25:59.125971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.824 [2024-11-10 15:25:59.125996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:52.824 [2024-11-10 15:25:59.126005] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.824 [2024-11-10 15:25:59.128465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.824 [2024-11-10 15:25:59.128503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:52.824 [2024-11-10 15:25:59.128575] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: 
raid superblock found on bdev spare 00:16:52.824 [2024-11-10 15:25:59.128621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:52.824 [2024-11-10 15:25:59.128751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.824 spare 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.824 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.084 [2024-11-10 15:25:59.228818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:53.084 [2024-11-10 15:25:59.228847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:53.084 [2024-11-10 15:25:59.229130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:16:53.084 [2024-11-10 15:25:59.229320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:53.084 [2024-11-10 15:25:59.229337] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:53.084 [2024-11-10 15:25:59.229461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.084 
15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.084 "name": "raid_bdev1", 00:16:53.084 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:53.084 "strip_size_kb": 0, 00:16:53.084 "state": "online", 00:16:53.084 "raid_level": "raid1", 00:16:53.084 "superblock": true, 00:16:53.084 "num_base_bdevs": 2, 00:16:53.084 "num_base_bdevs_discovered": 2, 00:16:53.084 "num_base_bdevs_operational": 2, 00:16:53.084 "base_bdevs_list": [ 00:16:53.084 { 00:16:53.084 "name": "spare", 00:16:53.084 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:53.084 "is_configured": true, 00:16:53.084 "data_offset": 256, 00:16:53.084 
"data_size": 7936 00:16:53.084 }, 00:16:53.084 { 00:16:53.084 "name": "BaseBdev2", 00:16:53.084 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:53.084 "is_configured": true, 00:16:53.084 "data_offset": 256, 00:16:53.084 "data_size": 7936 00:16:53.084 } 00:16:53.084 ] 00:16:53.084 }' 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.084 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.361 "name": "raid_bdev1", 00:16:53.361 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:53.361 "strip_size_kb": 0, 00:16:53.361 "state": "online", 00:16:53.361 "raid_level": "raid1", 00:16:53.361 "superblock": true, 00:16:53.361 "num_base_bdevs": 2, 
00:16:53.361 "num_base_bdevs_discovered": 2, 00:16:53.361 "num_base_bdevs_operational": 2, 00:16:53.361 "base_bdevs_list": [ 00:16:53.361 { 00:16:53.361 "name": "spare", 00:16:53.361 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:53.361 "is_configured": true, 00:16:53.361 "data_offset": 256, 00:16:53.361 "data_size": 7936 00:16:53.361 }, 00:16:53.361 { 00:16:53.361 "name": "BaseBdev2", 00:16:53.361 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:53.361 "is_configured": true, 00:16:53.361 "data_offset": 256, 00:16:53.361 "data_size": 7936 00:16:53.361 } 00:16:53.361 ] 00:16:53.361 }' 00:16:53.361 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:53.628 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.629 15:25:59 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.629 [2024-11-10 15:25:59.854163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.629 
15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.629 "name": "raid_bdev1", 00:16:53.629 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:53.629 "strip_size_kb": 0, 00:16:53.629 "state": "online", 00:16:53.629 "raid_level": "raid1", 00:16:53.629 "superblock": true, 00:16:53.629 "num_base_bdevs": 2, 00:16:53.629 "num_base_bdevs_discovered": 1, 00:16:53.629 "num_base_bdevs_operational": 1, 00:16:53.629 "base_bdevs_list": [ 00:16:53.629 { 00:16:53.629 "name": null, 00:16:53.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.629 "is_configured": false, 00:16:53.629 "data_offset": 0, 00:16:53.629 "data_size": 7936 00:16:53.629 }, 00:16:53.629 { 00:16:53.629 "name": "BaseBdev2", 00:16:53.629 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:53.629 "is_configured": true, 00:16:53.629 "data_offset": 256, 00:16:53.629 "data_size": 7936 00:16:53.629 } 00:16:53.629 ] 00:16:53.629 }' 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.629 15:25:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.198 15:26:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.198 15:26:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.198 15:26:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.198 [2024-11-10 15:26:00.322305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.198 [2024-11-10 15:26:00.322430] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.198 [2024-11-10 15:26:00.322450] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:54.198 [2024-11-10 15:26:00.322495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.198 [2024-11-10 15:26:00.331046] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:16:54.198 15:26:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.198 15:26:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:54.198 [2024-11-10 15:26:00.333225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.136 "name": "raid_bdev1", 00:16:55.136 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:55.136 "strip_size_kb": 0, 00:16:55.136 "state": "online", 
00:16:55.136 "raid_level": "raid1", 00:16:55.136 "superblock": true, 00:16:55.136 "num_base_bdevs": 2, 00:16:55.136 "num_base_bdevs_discovered": 2, 00:16:55.136 "num_base_bdevs_operational": 2, 00:16:55.136 "process": { 00:16:55.136 "type": "rebuild", 00:16:55.136 "target": "spare", 00:16:55.136 "progress": { 00:16:55.136 "blocks": 2560, 00:16:55.136 "percent": 32 00:16:55.136 } 00:16:55.136 }, 00:16:55.136 "base_bdevs_list": [ 00:16:55.136 { 00:16:55.136 "name": "spare", 00:16:55.136 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:55.136 "is_configured": true, 00:16:55.136 "data_offset": 256, 00:16:55.136 "data_size": 7936 00:16:55.136 }, 00:16:55.136 { 00:16:55.136 "name": "BaseBdev2", 00:16:55.136 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:55.136 "is_configured": true, 00:16:55.136 "data_offset": 256, 00:16:55.136 "data_size": 7936 00:16:55.136 } 00:16:55.136 ] 00:16:55.136 }' 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.136 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.136 [2024-11-10 15:26:01.488626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.396 [2024-11-10 15:26:01.542764] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.396 [2024-11-10 
15:26:01.542825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.396 [2024-11-10 15:26:01.542840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.396 [2024-11-10 15:26:01.542849] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.396 "name": "raid_bdev1", 00:16:55.396 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:55.396 "strip_size_kb": 0, 00:16:55.396 "state": "online", 00:16:55.396 "raid_level": "raid1", 00:16:55.396 "superblock": true, 00:16:55.396 "num_base_bdevs": 2, 00:16:55.396 "num_base_bdevs_discovered": 1, 00:16:55.396 "num_base_bdevs_operational": 1, 00:16:55.396 "base_bdevs_list": [ 00:16:55.396 { 00:16:55.396 "name": null, 00:16:55.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.396 "is_configured": false, 00:16:55.396 "data_offset": 0, 00:16:55.396 "data_size": 7936 00:16:55.396 }, 00:16:55.396 { 00:16:55.396 "name": "BaseBdev2", 00:16:55.396 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:55.396 "is_configured": true, 00:16:55.396 "data_offset": 256, 00:16:55.396 "data_size": 7936 00:16:55.396 } 00:16:55.396 ] 00:16:55.396 }' 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.396 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.656 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:55.656 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.656 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.656 [2024-11-10 15:26:01.970661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:55.656 [2024-11-10 15:26:01.970725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:55.656 [2024-11-10 15:26:01.970747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:55.656 [2024-11-10 15:26:01.970759] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:55.656 [2024-11-10 15:26:01.971276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:55.657 [2024-11-10 15:26:01.971305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:55.657 [2024-11-10 15:26:01.971385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:55.657 [2024-11-10 15:26:01.971405] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:55.657 [2024-11-10 15:26:01.971416] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:55.657 [2024-11-10 15:26:01.971461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.657 [2024-11-10 15:26:01.978844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:16:55.657 spare 00:16:55.657 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.657 15:26:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:55.657 [2024-11-10 15:26:01.981099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.038 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.038 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.038 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.038 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.039 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.039 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.039 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.039 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.039 15:26:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.039 "name": "raid_bdev1", 00:16:57.039 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:57.039 "strip_size_kb": 0, 00:16:57.039 "state": "online", 00:16:57.039 "raid_level": "raid1", 00:16:57.039 "superblock": true, 00:16:57.039 "num_base_bdevs": 2, 00:16:57.039 "num_base_bdevs_discovered": 2, 00:16:57.039 "num_base_bdevs_operational": 2, 00:16:57.039 "process": { 00:16:57.039 "type": "rebuild", 00:16:57.039 "target": "spare", 00:16:57.039 "progress": { 00:16:57.039 "blocks": 2560, 00:16:57.039 "percent": 32 00:16:57.039 } 00:16:57.039 }, 00:16:57.039 "base_bdevs_list": [ 00:16:57.039 { 00:16:57.039 "name": "spare", 00:16:57.039 "uuid": "5251dd6e-2e22-50e9-bc59-bf8e1ed421e2", 00:16:57.039 "is_configured": true, 00:16:57.039 "data_offset": 256, 00:16:57.039 "data_size": 7936 00:16:57.039 }, 00:16:57.039 { 00:16:57.039 "name": "BaseBdev2", 00:16:57.039 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:57.039 "is_configured": true, 00:16:57.039 "data_offset": 256, 00:16:57.039 "data_size": 7936 00:16:57.039 } 00:16:57.039 ] 00:16:57.039 }' 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.039 [2024-11-10 15:26:03.123109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.039 [2024-11-10 15:26:03.190675] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.039 [2024-11-10 15:26:03.190733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.039 [2024-11-10 15:26:03.190750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.039 [2024-11-10 15:26:03.190757] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.039 "name": "raid_bdev1", 00:16:57.039 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:57.039 "strip_size_kb": 0, 00:16:57.039 "state": "online", 00:16:57.039 "raid_level": "raid1", 00:16:57.039 "superblock": true, 00:16:57.039 "num_base_bdevs": 2, 00:16:57.039 "num_base_bdevs_discovered": 1, 00:16:57.039 "num_base_bdevs_operational": 1, 00:16:57.039 "base_bdevs_list": [ 00:16:57.039 { 00:16:57.039 "name": null, 00:16:57.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.039 "is_configured": false, 00:16:57.039 "data_offset": 0, 00:16:57.039 "data_size": 7936 00:16:57.039 }, 00:16:57.039 { 00:16:57.039 "name": "BaseBdev2", 00:16:57.039 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:57.039 "is_configured": true, 00:16:57.039 "data_offset": 256, 00:16:57.039 "data_size": 7936 00:16:57.039 } 00:16:57.039 ] 00:16:57.039 }' 
00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.039 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.609 "name": "raid_bdev1", 00:16:57.609 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:57.609 "strip_size_kb": 0, 00:16:57.609 "state": "online", 00:16:57.609 "raid_level": "raid1", 00:16:57.609 "superblock": true, 00:16:57.609 "num_base_bdevs": 2, 00:16:57.609 "num_base_bdevs_discovered": 1, 00:16:57.609 "num_base_bdevs_operational": 1, 00:16:57.609 "base_bdevs_list": [ 00:16:57.609 { 00:16:57.609 "name": null, 00:16:57.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.609 "is_configured": false, 00:16:57.609 "data_offset": 0, 
00:16:57.609 "data_size": 7936 00:16:57.609 }, 00:16:57.609 { 00:16:57.609 "name": "BaseBdev2", 00:16:57.609 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:57.609 "is_configured": true, 00:16:57.609 "data_offset": 256, 00:16:57.609 "data_size": 7936 00:16:57.609 } 00:16:57.609 ] 00:16:57.609 }' 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.609 [2024-11-10 15:26:03.846492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:57.609 [2024-11-10 15:26:03.846544] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.609 [2024-11-10 15:26:03.846585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:57.609 [2024-11-10 15:26:03.846594] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.609 [2024-11-10 15:26:03.847069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.609 [2024-11-10 15:26:03.847093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:57.609 [2024-11-10 15:26:03.847178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:57.609 [2024-11-10 15:26:03.847193] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:57.609 [2024-11-10 15:26:03.847208] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:57.609 [2024-11-10 15:26:03.847219] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:57.609 BaseBdev1 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.609 15:26:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.548 "name": "raid_bdev1", 00:16:58.548 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:58.548 "strip_size_kb": 0, 00:16:58.548 "state": "online", 00:16:58.548 "raid_level": "raid1", 00:16:58.548 "superblock": true, 00:16:58.548 "num_base_bdevs": 2, 00:16:58.548 "num_base_bdevs_discovered": 1, 00:16:58.548 "num_base_bdevs_operational": 1, 00:16:58.548 "base_bdevs_list": [ 00:16:58.548 { 00:16:58.548 "name": null, 00:16:58.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.548 "is_configured": false, 00:16:58.548 "data_offset": 0, 00:16:58.548 "data_size": 7936 00:16:58.548 }, 00:16:58.548 { 00:16:58.548 "name": "BaseBdev2", 00:16:58.548 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:58.548 "is_configured": true, 00:16:58.548 "data_offset": 256, 00:16:58.548 "data_size": 7936 00:16:58.548 } 00:16:58.548 ] 00:16:58.548 }' 00:16:58.548 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.808 15:26:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:59.067 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:59.067 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.067 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:59.067 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.068 "name": "raid_bdev1", 00:16:59.068 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:16:59.068 "strip_size_kb": 0, 00:16:59.068 "state": "online", 00:16:59.068 "raid_level": "raid1", 00:16:59.068 "superblock": true, 00:16:59.068 "num_base_bdevs": 2, 00:16:59.068 "num_base_bdevs_discovered": 1, 00:16:59.068 "num_base_bdevs_operational": 1, 00:16:59.068 "base_bdevs_list": [ 00:16:59.068 { 00:16:59.068 "name": null, 00:16:59.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.068 "is_configured": false, 00:16:59.068 "data_offset": 0, 00:16:59.068 "data_size": 7936 00:16:59.068 }, 00:16:59.068 { 00:16:59.068 "name": "BaseBdev2", 00:16:59.068 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:16:59.068 "is_configured": true, 
00:16:59.068 "data_offset": 256, 00:16:59.068 "data_size": 7936 00:16:59.068 } 00:16:59.068 ] 00:16:59.068 }' 00:16:59.068 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@650 -- # local es=0 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.327 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.327 [2024-11-10 15:26:05.514940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.327 [2024-11-10 15:26:05.515096] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:59.327 [2024-11-10 15:26:05.515117] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:59.327 request: 00:16:59.327 { 00:16:59.327 "base_bdev": "BaseBdev1", 00:16:59.328 "raid_bdev": "raid_bdev1", 00:16:59.328 "method": "bdev_raid_add_base_bdev", 00:16:59.328 "req_id": 1 00:16:59.328 } 00:16:59.328 Got JSON-RPC error response 00:16:59.328 response: 00:16:59.328 { 00:16:59.328 "code": -22, 00:16:59.328 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:59.328 } 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.328 15:26:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.267 "name": "raid_bdev1", 00:17:00.267 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:17:00.267 "strip_size_kb": 0, 00:17:00.267 "state": "online", 00:17:00.267 "raid_level": "raid1", 00:17:00.267 "superblock": true, 00:17:00.267 "num_base_bdevs": 2, 00:17:00.267 "num_base_bdevs_discovered": 1, 00:17:00.267 "num_base_bdevs_operational": 1, 00:17:00.267 "base_bdevs_list": [ 00:17:00.267 { 00:17:00.267 "name": null, 00:17:00.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.267 "is_configured": false, 00:17:00.267 "data_offset": 0, 00:17:00.267 "data_size": 7936 00:17:00.267 }, 00:17:00.267 { 00:17:00.267 "name": "BaseBdev2", 00:17:00.267 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:17:00.267 "is_configured": true, 00:17:00.267 "data_offset": 256, 00:17:00.267 "data_size": 7936 00:17:00.267 } 00:17:00.267 ] 00:17:00.267 }' 
00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.267 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.837 15:26:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.837 "name": "raid_bdev1", 00:17:00.837 "uuid": "1a62a237-d227-4553-9ebc-5cbf49f315e5", 00:17:00.837 "strip_size_kb": 0, 00:17:00.837 "state": "online", 00:17:00.837 "raid_level": "raid1", 00:17:00.837 "superblock": true, 00:17:00.837 "num_base_bdevs": 2, 00:17:00.837 "num_base_bdevs_discovered": 1, 00:17:00.837 "num_base_bdevs_operational": 1, 00:17:00.837 "base_bdevs_list": [ 00:17:00.837 { 00:17:00.837 "name": null, 00:17:00.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.837 "is_configured": false, 00:17:00.837 "data_offset": 0, 
00:17:00.837 "data_size": 7936 00:17:00.837 }, 00:17:00.837 { 00:17:00.837 "name": "BaseBdev2", 00:17:00.837 "uuid": "5a9d83b3-9fd0-5ec2-a60f-f783a0e9c719", 00:17:00.837 "is_configured": true, 00:17:00.837 "data_offset": 256, 00:17:00.837 "data_size": 7936 00:17:00.837 } 00:17:00.837 ] 00:17:00.837 }' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 98263 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@952 -- # '[' -z 98263 ']' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # kill -0 98263 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # uname 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98263 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:00.837 killing process with pid 98263 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98263' 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@971 -- # kill 98263 00:17:00.837 Received shutdown signal, test time was about 
60.000000 seconds 00:17:00.837 00:17:00.837 Latency(us) 00:17:00.837 [2024-11-10T15:26:07.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.837 [2024-11-10T15:26:07.200Z] =================================================================================================================== 00:17:00.837 [2024-11-10T15:26:07.200Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:00.837 [2024-11-10 15:26:07.168637] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:00.837 [2024-11-10 15:26:07.168763] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.837 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@976 -- # wait 98263 00:17:00.837 [2024-11-10 15:26:07.168815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.837 [2024-11-10 15:26:07.168828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:01.097 [2024-11-10 15:26:07.226518] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:01.384 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:01.384 00:17:01.384 real 0m18.547s 00:17:01.384 user 0m24.468s 00:17:01.384 sys 0m2.802s 00:17:01.384 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:01.384 15:26:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.384 ************************************ 00:17:01.384 END TEST raid_rebuild_test_sb_4k 00:17:01.384 ************************************ 00:17:01.384 15:26:07 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:01.384 15:26:07 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:01.384 15:26:07 bdev_raid -- 
common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:01.384 15:26:07 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:01.384 15:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:01.384 ************************************ 00:17:01.384 START TEST raid_state_function_test_sb_md_separate 00:17:01.384 ************************************ 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:01.384 15:26:07 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=98947 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:01.384 Process raid pid: 98947 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98947' 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 98947 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 98947 ']' 00:17:01.384 15:26:07 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:01.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:01.384 15:26:07 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.384 [2024-11-10 15:26:07.729556] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:17:01.384 [2024-11-10 15:26:07.729682] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.644 [2024-11-10 15:26:07.869428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:01.644 [2024-11-10 15:26:07.906947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.644 [2024-11-10 15:26:07.946751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:01.903 [2024-11-10 15:26:08.023859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:01.903 [2024-11-10 15:26:08.023897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.471 [2024-11-10 15:26:08.564696] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.471 [2024-11-10 15:26:08.564751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.471 [2024-11-10 15:26:08.564763] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.471 [2024-11-10 15:26:08.564782] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.471 "name": "Existed_Raid", 00:17:02.471 "uuid": "6ecfaf99-847c-43ba-8a92-b95fe8f0f34f", 00:17:02.471 "strip_size_kb": 0, 00:17:02.471 "state": 
"configuring", 00:17:02.471 "raid_level": "raid1", 00:17:02.471 "superblock": true, 00:17:02.471 "num_base_bdevs": 2, 00:17:02.471 "num_base_bdevs_discovered": 0, 00:17:02.471 "num_base_bdevs_operational": 2, 00:17:02.471 "base_bdevs_list": [ 00:17:02.471 { 00:17:02.471 "name": "BaseBdev1", 00:17:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.471 "is_configured": false, 00:17:02.471 "data_offset": 0, 00:17:02.471 "data_size": 0 00:17:02.471 }, 00:17:02.471 { 00:17:02.471 "name": "BaseBdev2", 00:17:02.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.471 "is_configured": false, 00:17:02.471 "data_offset": 0, 00:17:02.471 "data_size": 0 00:17:02.471 } 00:17:02.471 ] 00:17:02.471 }' 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.471 15:26:08 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 [2024-11-10 15:26:09.008690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:02.730 [2024-11-10 15:26:09.008729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 [2024-11-10 15:26:09.020724] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:02.730 [2024-11-10 15:26:09.020771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:02.730 [2024-11-10 15:26:09.020782] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:02.730 [2024-11-10 15:26:09.020789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 [2024-11-10 15:26:09.048830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:02.730 BaseBdev1 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 
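After `bdev_malloc_create 32 4096 -m 32 -b BaseBdev1` the script calls `waitforbdev BaseBdev1`, which first runs `bdev_wait_for_examine` and then `bdev_get_bdevs -b BaseBdev1 -t 2000` (all visible in the xtrace above). A sketch of the polling half, with `rpc_cmd` left as an external helper; the retry count and sleep interval here are illustrative assumptions, not the values from autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the waitforbdev polling seen at autotest_common.sh@908.
# rpc_cmd is assumed to be provided by the test environment.
waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}   # ms, matching '-t 2000' in the trace
    local i
    for ((i = 0; i < 10; i++)); do
        # bdev_get_bdevs fails until the bdev is registered and examined
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null 2>&1; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

Once this returns 0 the test can safely pass the bdev name to `bdev_raid_create`, which is why the trace shows `waitforbdev` between every malloc creation and the next raid operation.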
00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 [ 00:17:02.730 { 00:17:02.730 "name": "BaseBdev1", 00:17:02.730 "aliases": [ 00:17:02.730 "041e118e-a593-4aef-95f5-ea77c93a2575" 00:17:02.730 ], 00:17:02.730 "product_name": "Malloc disk", 00:17:02.730 "block_size": 4096, 00:17:02.730 "num_blocks": 8192, 00:17:02.730 "uuid": "041e118e-a593-4aef-95f5-ea77c93a2575", 00:17:02.730 "md_size": 32, 00:17:02.730 "md_interleave": false, 00:17:02.730 "dif_type": 0, 00:17:02.730 "assigned_rate_limits": { 00:17:02.730 "rw_ios_per_sec": 0, 00:17:02.730 "rw_mbytes_per_sec": 0, 00:17:02.730 "r_mbytes_per_sec": 0, 00:17:02.730 "w_mbytes_per_sec": 0 00:17:02.730 }, 00:17:02.730 "claimed": true, 00:17:02.730 "claim_type": "exclusive_write", 00:17:02.730 "zoned": false, 00:17:02.730 "supported_io_types": { 00:17:02.730 "read": true, 00:17:02.730 "write": true, 00:17:02.730 "unmap": true, 
00:17:02.730 "flush": true, 00:17:02.730 "reset": true, 00:17:02.730 "nvme_admin": false, 00:17:02.730 "nvme_io": false, 00:17:02.730 "nvme_io_md": false, 00:17:02.730 "write_zeroes": true, 00:17:02.730 "zcopy": true, 00:17:02.730 "get_zone_info": false, 00:17:02.730 "zone_management": false, 00:17:02.730 "zone_append": false, 00:17:02.730 "compare": false, 00:17:02.730 "compare_and_write": false, 00:17:02.730 "abort": true, 00:17:02.730 "seek_hole": false, 00:17:02.730 "seek_data": false, 00:17:02.730 "copy": true, 00:17:02.730 "nvme_iov_md": false 00:17:02.730 }, 00:17:02.730 "memory_domains": [ 00:17:02.730 { 00:17:02.730 "dma_device_id": "system", 00:17:02.730 "dma_device_type": 1 00:17:02.730 }, 00:17:02.730 { 00:17:02.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.730 "dma_device_type": 2 00:17:02.730 } 00:17:02.730 ], 00:17:02.730 "driver_specific": {} 00:17:02.730 } 00:17:02.730 ] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:02.730 15:26:09 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.730 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.989 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.989 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.989 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.989 "name": "Existed_Raid", 00:17:02.989 "uuid": "9e769315-ce79-478e-9430-35b4d037dbca", 00:17:02.989 "strip_size_kb": 0, 00:17:02.989 "state": "configuring", 00:17:02.989 "raid_level": "raid1", 00:17:02.989 "superblock": true, 00:17:02.989 "num_base_bdevs": 2, 00:17:02.989 "num_base_bdevs_discovered": 1, 00:17:02.989 "num_base_bdevs_operational": 2, 00:17:02.989 "base_bdevs_list": [ 00:17:02.989 { 00:17:02.989 "name": "BaseBdev1", 00:17:02.989 "uuid": "041e118e-a593-4aef-95f5-ea77c93a2575", 00:17:02.989 "is_configured": true, 00:17:02.989 "data_offset": 256, 00:17:02.989 "data_size": 7936 00:17:02.989 }, 00:17:02.989 { 00:17:02.989 "name": "BaseBdev2", 00:17:02.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.989 "is_configured": 
false, 00:17:02.989 "data_offset": 0, 00:17:02.989 "data_size": 0 00:17:02.989 } 00:17:02.989 ] 00:17:02.989 }' 00:17:02.989 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.989 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.248 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 [2024-11-10 15:26:09.485000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:03.249 [2024-11-10 15:26:09.485063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 [2024-11-10 15:26:09.497076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:03.249 [2024-11-10 15:26:09.499164] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.249 [2024-11-10 15:26:09.499201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.249 15:26:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.249 15:26:09 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.249 "name": "Existed_Raid", 00:17:03.249 "uuid": "c2ec5b5d-00b8-4155-bbc4-2855c206bacb", 00:17:03.249 "strip_size_kb": 0, 00:17:03.249 "state": "configuring", 00:17:03.249 "raid_level": "raid1", 00:17:03.249 "superblock": true, 00:17:03.249 "num_base_bdevs": 2, 00:17:03.249 "num_base_bdevs_discovered": 1, 00:17:03.249 "num_base_bdevs_operational": 2, 00:17:03.249 "base_bdevs_list": [ 00:17:03.249 { 00:17:03.249 "name": "BaseBdev1", 00:17:03.249 "uuid": "041e118e-a593-4aef-95f5-ea77c93a2575", 00:17:03.249 "is_configured": true, 00:17:03.249 "data_offset": 256, 00:17:03.249 "data_size": 7936 00:17:03.249 }, 00:17:03.249 { 00:17:03.249 "name": "BaseBdev2", 00:17:03.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.249 "is_configured": false, 00:17:03.249 "data_offset": 0, 00:17:03.249 "data_size": 0 00:17:03.249 } 00:17:03.249 ] 00:17:03.249 }' 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.249 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.818 [2024-11-10 
15:26:09.983005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.818 [2024-11-10 15:26:09.983192] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:03.818 [2024-11-10 15:26:09.983209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.818 [2024-11-10 15:26:09.983309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:03.818 [2024-11-10 15:26:09.983420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:03.818 [2024-11-10 15:26:09.983452] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:17:03.818 [2024-11-10 15:26:09.983580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.818 BaseBdev2 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev2 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local i 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.818 15:26:09 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.818 [ 00:17:03.818 { 00:17:03.818 "name": "BaseBdev2", 00:17:03.818 "aliases": [ 00:17:03.818 "cd4ee4fc-7b9e-4bfe-bcfb-98aa0692e7fc" 00:17:03.818 ], 00:17:03.818 "product_name": "Malloc disk", 00:17:03.818 "block_size": 4096, 00:17:03.818 "num_blocks": 8192, 00:17:03.818 "uuid": "cd4ee4fc-7b9e-4bfe-bcfb-98aa0692e7fc", 00:17:03.818 "md_size": 32, 00:17:03.818 "md_interleave": false, 00:17:03.818 "dif_type": 0, 00:17:03.818 "assigned_rate_limits": { 00:17:03.818 "rw_ios_per_sec": 0, 00:17:03.818 "rw_mbytes_per_sec": 0, 00:17:03.818 "r_mbytes_per_sec": 0, 00:17:03.818 "w_mbytes_per_sec": 0 00:17:03.818 }, 00:17:03.818 "claimed": true, 00:17:03.818 "claim_type": "exclusive_write", 00:17:03.818 "zoned": false, 00:17:03.818 "supported_io_types": { 00:17:03.818 "read": true, 00:17:03.818 "write": true, 00:17:03.818 "unmap": true, 00:17:03.818 "flush": true, 00:17:03.818 "reset": true, 00:17:03.818 "nvme_admin": false, 00:17:03.818 "nvme_io": false, 00:17:03.818 "nvme_io_md": false, 00:17:03.818 "write_zeroes": true, 00:17:03.818 "zcopy": true, 00:17:03.818 "get_zone_info": false, 00:17:03.818 "zone_management": false, 00:17:03.818 "zone_append": false, 00:17:03.818 "compare": false, 00:17:03.818 "compare_and_write": false, 00:17:03.818 "abort": true, 00:17:03.818 "seek_hole": false, 
00:17:03.818 "seek_data": false, 00:17:03.818 "copy": true, 00:17:03.818 "nvme_iov_md": false 00:17:03.818 }, 00:17:03.818 "memory_domains": [ 00:17:03.818 { 00:17:03.818 "dma_device_id": "system", 00:17:03.818 "dma_device_type": 1 00:17:03.818 }, 00:17:03.818 { 00:17:03.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.818 "dma_device_type": 2 00:17:03.818 } 00:17:03.818 ], 00:17:03.818 "driver_specific": {} 00:17:03.818 } 00:17:03.818 ] 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@909 -- # return 0 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.818 
15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.818 "name": "Existed_Raid", 00:17:03.818 "uuid": "c2ec5b5d-00b8-4155-bbc4-2855c206bacb", 00:17:03.818 "strip_size_kb": 0, 00:17:03.818 "state": "online", 00:17:03.818 "raid_level": "raid1", 00:17:03.818 "superblock": true, 00:17:03.818 "num_base_bdevs": 2, 00:17:03.818 "num_base_bdevs_discovered": 2, 00:17:03.818 "num_base_bdevs_operational": 2, 00:17:03.818 "base_bdevs_list": [ 00:17:03.818 { 00:17:03.818 "name": "BaseBdev1", 00:17:03.818 "uuid": "041e118e-a593-4aef-95f5-ea77c93a2575", 00:17:03.818 "is_configured": true, 00:17:03.818 "data_offset": 256, 00:17:03.818 "data_size": 7936 00:17:03.818 }, 00:17:03.818 { 00:17:03.818 "name": "BaseBdev2", 00:17:03.818 "uuid": "cd4ee4fc-7b9e-4bfe-bcfb-98aa0692e7fc", 00:17:03.818 "is_configured": true, 00:17:03.818 "data_offset": 256, 00:17:03.818 "data_size": 7936 00:17:03.818 } 00:17:03.818 ] 00:17:03.818 }' 00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
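Each `verify_raid_bdev_state` call above filters `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and checks the reported fields (`state`, `raid_level`, `num_base_bdevs_discovered`, ...) against the expected values. A self-contained sketch of the state check, using a trimmed copy of the JSON from the trace; the sed extraction is a stand-in for the jq filter the real script uses:

```shell
#!/usr/bin/env bash
# Sketch of the state comparison inside verify_raid_bdev_state.
# raid_bdev_info is a trimmed copy of the dump captured in the log above.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2
}'
# Pull the "state" value out of the JSON (jq does this in the real test)
state=$(sed -n 's/.*"state": "\([a-z]*\)".*/\1/p' <<< "$raid_bdev_info")
[[ $state == online ]] || echo "unexpected raid state: $state"
```

This is why the trace alternates `expected_state=configuring` (before both base bdevs exist) and `expected_state=online` (after `bdev_raid_configure_cont` completes): the same comparison runs at each step with a different expected value.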
00:17:03.818 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.078 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:04.078 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.079 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.079 [2024-11-10 15:26:10.427440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:04.339 "name": "Existed_Raid", 00:17:04.339 "aliases": [ 00:17:04.339 "c2ec5b5d-00b8-4155-bbc4-2855c206bacb" 00:17:04.339 ], 00:17:04.339 "product_name": "Raid Volume", 00:17:04.339 "block_size": 4096, 00:17:04.339 "num_blocks": 7936, 
00:17:04.339 "uuid": "c2ec5b5d-00b8-4155-bbc4-2855c206bacb", 00:17:04.339 "md_size": 32, 00:17:04.339 "md_interleave": false, 00:17:04.339 "dif_type": 0, 00:17:04.339 "assigned_rate_limits": { 00:17:04.339 "rw_ios_per_sec": 0, 00:17:04.339 "rw_mbytes_per_sec": 0, 00:17:04.339 "r_mbytes_per_sec": 0, 00:17:04.339 "w_mbytes_per_sec": 0 00:17:04.339 }, 00:17:04.339 "claimed": false, 00:17:04.339 "zoned": false, 00:17:04.339 "supported_io_types": { 00:17:04.339 "read": true, 00:17:04.339 "write": true, 00:17:04.339 "unmap": false, 00:17:04.339 "flush": false, 00:17:04.339 "reset": true, 00:17:04.339 "nvme_admin": false, 00:17:04.339 "nvme_io": false, 00:17:04.339 "nvme_io_md": false, 00:17:04.339 "write_zeroes": true, 00:17:04.339 "zcopy": false, 00:17:04.339 "get_zone_info": false, 00:17:04.339 "zone_management": false, 00:17:04.339 "zone_append": false, 00:17:04.339 "compare": false, 00:17:04.339 "compare_and_write": false, 00:17:04.339 "abort": false, 00:17:04.339 "seek_hole": false, 00:17:04.339 "seek_data": false, 00:17:04.339 "copy": false, 00:17:04.339 "nvme_iov_md": false 00:17:04.339 }, 00:17:04.339 "memory_domains": [ 00:17:04.339 { 00:17:04.339 "dma_device_id": "system", 00:17:04.339 "dma_device_type": 1 00:17:04.339 }, 00:17:04.339 { 00:17:04.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.339 "dma_device_type": 2 00:17:04.339 }, 00:17:04.339 { 00:17:04.339 "dma_device_id": "system", 00:17:04.339 "dma_device_type": 1 00:17:04.339 }, 00:17:04.339 { 00:17:04.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.339 "dma_device_type": 2 00:17:04.339 } 00:17:04.339 ], 00:17:04.339 "driver_specific": { 00:17:04.339 "raid": { 00:17:04.339 "uuid": "c2ec5b5d-00b8-4155-bbc4-2855c206bacb", 00:17:04.339 "strip_size_kb": 0, 00:17:04.339 "state": "online", 00:17:04.339 "raid_level": "raid1", 00:17:04.339 "superblock": true, 00:17:04.339 "num_base_bdevs": 2, 00:17:04.339 "num_base_bdevs_discovered": 2, 00:17:04.339 "num_base_bdevs_operational": 2, 00:17:04.339 
"base_bdevs_list": [ 00:17:04.339 { 00:17:04.339 "name": "BaseBdev1", 00:17:04.339 "uuid": "041e118e-a593-4aef-95f5-ea77c93a2575", 00:17:04.339 "is_configured": true, 00:17:04.339 "data_offset": 256, 00:17:04.339 "data_size": 7936 00:17:04.339 }, 00:17:04.339 { 00:17:04.339 "name": "BaseBdev2", 00:17:04.339 "uuid": "cd4ee4fc-7b9e-4bfe-bcfb-98aa0692e7fc", 00:17:04.339 "is_configured": true, 00:17:04.339 "data_offset": 256, 00:17:04.339 "data_size": 7936 00:17:04.339 } 00:17:04.339 ] 00:17:04.339 } 00:17:04.339 } 00:17:04.339 }' 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:04.339 BaseBdev2' 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.339 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.340 [2024-11-10 15:26:10.647305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.340 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.599 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.599 "name": "Existed_Raid", 00:17:04.599 "uuid": "c2ec5b5d-00b8-4155-bbc4-2855c206bacb", 00:17:04.599 "strip_size_kb": 0, 00:17:04.599 "state": "online", 00:17:04.599 "raid_level": "raid1", 00:17:04.599 "superblock": true, 00:17:04.599 "num_base_bdevs": 2, 00:17:04.599 "num_base_bdevs_discovered": 1, 00:17:04.599 "num_base_bdevs_operational": 1, 00:17:04.599 "base_bdevs_list": [ 00:17:04.599 { 00:17:04.599 "name": null, 00:17:04.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.599 "is_configured": false, 00:17:04.599 "data_offset": 0, 00:17:04.599 "data_size": 7936 00:17:04.599 }, 00:17:04.599 { 00:17:04.599 "name": "BaseBdev2", 00:17:04.599 "uuid": "cd4ee4fc-7b9e-4bfe-bcfb-98aa0692e7fc", 00:17:04.599 "is_configured": true, 00:17:04.599 "data_offset": 256, 00:17:04.599 "data_size": 7936 00:17:04.599 } 00:17:04.599 ] 00:17:04.599 }' 00:17:04.599 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.599 15:26:10 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs 
)) 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.859 [2024-11-10 15:26:11.169357] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.859 [2024-11-10 15:26:11.169462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.859 [2024-11-10 15:26:11.190876] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.859 [2024-11-10 15:26:11.190934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:04.859 [2024-11-10 15:26:11.190951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:17:04.859 15:26:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.859 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.860 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.860 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:04.860 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.860 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.860 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.119 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.119 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.119 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 98947 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 98947 ']' 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 98947 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.120 15:26:11 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 98947 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.120 killing process with pid 98947 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 98947' 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 98947 00:17:05.120 [2024-11-10 15:26:11.289918] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.120 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 98947 00:17:05.120 [2024-11-10 15:26:11.291510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:05.380 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:05.380 00:17:05.380 real 0m4.000s 00:17:05.380 user 0m6.074s 00:17:05.380 sys 0m0.968s 00:17:05.380 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:05.380 15:26:11 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.380 ************************************ 00:17:05.380 END TEST raid_state_function_test_sb_md_separate 00:17:05.380 ************************************ 00:17:05.380 15:26:11 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:05.380 15:26:11 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:05.380 15:26:11 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:05.380 15:26:11 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.380 ************************************ 00:17:05.380 START TEST raid_superblock_test_md_separate 00:17:05.381 ************************************ 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:05.381 15:26:11 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=99185 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 99185 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # '[' -z 99185 ']' 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.381 15:26:11 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.641 [2024-11-10 15:26:11.808248] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:17:05.641 [2024-11-10 15:26:11.808389] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99185 ] 00:17:05.641 [2024-11-10 15:26:11.947991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:05.641 [2024-11-10 15:26:11.985148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.900 [2024-11-10 15:26:12.026889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.900 [2024-11-10 15:26:12.104104] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.900 [2024-11-10 15:26:12.104145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@866 -- # return 0 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.469 15:26:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 malloc1 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 [2024-11-10 15:26:12.644831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.469 [2024-11-10 15:26:12.644925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.469 [2024-11-10 15:26:12.644952] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:06.469 [2024-11-10 15:26:12.644962] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.469 [2024-11-10 15:26:12.647236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.469 [2024-11-10 15:26:12.647273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.469 pt1 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:06.469 15:26:12 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 malloc2 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 [2024-11-10 15:26:12.680758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.469 [2024-11-10 15:26:12.680810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.469 [2024-11-10 15:26:12.680829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:06.469 [2024-11-10 15:26:12.680838] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.469 [2024-11-10 15:26:12.683061] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.469 [2024-11-10 15:26:12.683092] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.469 pt2 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.469 [2024-11-10 15:26:12.692796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.469 [2024-11-10 15:26:12.694933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.469 [2024-11-10 15:26:12.695096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:06.469 [2024-11-10 15:26:12.695111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.469 [2024-11-10 15:26:12.695214] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:06.469 [2024-11-10 15:26:12.695346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:06.469 [2024-11-10 15:26:12.695364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:06.469 [2024-11-10 15:26:12.695489] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.469 15:26:12 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.469 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.470 "name": "raid_bdev1", 00:17:06.470 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:06.470 "strip_size_kb": 0, 00:17:06.470 "state": "online", 00:17:06.470 "raid_level": "raid1", 00:17:06.470 "superblock": true, 00:17:06.470 "num_base_bdevs": 2, 00:17:06.470 "num_base_bdevs_discovered": 2, 00:17:06.470 "num_base_bdevs_operational": 2, 00:17:06.470 "base_bdevs_list": [ 00:17:06.470 { 00:17:06.470 "name": "pt1", 00:17:06.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.470 "is_configured": true, 00:17:06.470 "data_offset": 256, 00:17:06.470 "data_size": 7936 00:17:06.470 }, 00:17:06.470 { 00:17:06.470 "name": "pt2", 00:17:06.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.470 "is_configured": true, 00:17:06.470 "data_offset": 256, 00:17:06.470 "data_size": 7936 00:17:06.470 } 00:17:06.470 ] 00:17:06.470 }' 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.470 15:26:12 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.038 15:26:13 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.038 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.038 [2024-11-10 15:26:13.157233] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.039 "name": "raid_bdev1", 00:17:07.039 "aliases": [ 00:17:07.039 "bfaf18bb-a273-4163-8f55-51eee57b013f" 00:17:07.039 ], 00:17:07.039 "product_name": "Raid Volume", 00:17:07.039 "block_size": 4096, 00:17:07.039 "num_blocks": 7936, 00:17:07.039 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:07.039 "md_size": 32, 00:17:07.039 "md_interleave": false, 00:17:07.039 "dif_type": 0, 00:17:07.039 "assigned_rate_limits": { 00:17:07.039 "rw_ios_per_sec": 0, 00:17:07.039 "rw_mbytes_per_sec": 0, 00:17:07.039 "r_mbytes_per_sec": 0, 00:17:07.039 "w_mbytes_per_sec": 0 00:17:07.039 }, 00:17:07.039 "claimed": false, 00:17:07.039 "zoned": false, 00:17:07.039 "supported_io_types": { 00:17:07.039 "read": true, 00:17:07.039 "write": true, 00:17:07.039 "unmap": false, 00:17:07.039 "flush": false, 00:17:07.039 "reset": true, 00:17:07.039 "nvme_admin": false, 00:17:07.039 "nvme_io": false, 00:17:07.039 "nvme_io_md": false, 00:17:07.039 "write_zeroes": true, 00:17:07.039 "zcopy": false, 00:17:07.039 "get_zone_info": false, 00:17:07.039 "zone_management": false, 00:17:07.039 "zone_append": false, 00:17:07.039 "compare": false, 00:17:07.039 "compare_and_write": false, 00:17:07.039 "abort": false, 00:17:07.039 "seek_hole": false, 00:17:07.039 "seek_data": false, 00:17:07.039 "copy": false, 00:17:07.039 
"nvme_iov_md": false 00:17:07.039 }, 00:17:07.039 "memory_domains": [ 00:17:07.039 { 00:17:07.039 "dma_device_id": "system", 00:17:07.039 "dma_device_type": 1 00:17:07.039 }, 00:17:07.039 { 00:17:07.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.039 "dma_device_type": 2 00:17:07.039 }, 00:17:07.039 { 00:17:07.039 "dma_device_id": "system", 00:17:07.039 "dma_device_type": 1 00:17:07.039 }, 00:17:07.039 { 00:17:07.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.039 "dma_device_type": 2 00:17:07.039 } 00:17:07.039 ], 00:17:07.039 "driver_specific": { 00:17:07.039 "raid": { 00:17:07.039 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:07.039 "strip_size_kb": 0, 00:17:07.039 "state": "online", 00:17:07.039 "raid_level": "raid1", 00:17:07.039 "superblock": true, 00:17:07.039 "num_base_bdevs": 2, 00:17:07.039 "num_base_bdevs_discovered": 2, 00:17:07.039 "num_base_bdevs_operational": 2, 00:17:07.039 "base_bdevs_list": [ 00:17:07.039 { 00:17:07.039 "name": "pt1", 00:17:07.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.039 "is_configured": true, 00:17:07.039 "data_offset": 256, 00:17:07.039 "data_size": 7936 00:17:07.039 }, 00:17:07.039 { 00:17:07.039 "name": "pt2", 00:17:07.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.039 "is_configured": true, 00:17:07.039 "data_offset": 256, 00:17:07.039 "data_size": 7936 00:17:07.039 } 00:17:07.039 ] 00:17:07.039 } 00:17:07.039 } 00:17:07.039 }' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.039 pt2' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 
00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.039 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.039 [2024-11-10 15:26:13.381199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bfaf18bb-a273-4163-8f55-51eee57b013f 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z bfaf18bb-a273-4163-8f55-51eee57b013f ']' 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.299 [2024-11-10 15:26:13.420972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.299 [2024-11-10 15:26:13.421001] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.299 [2024-11-10 15:26:13.421111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.299 [2024-11-10 15:26:13.421160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 
0, going to free all in destruct 00:17:07.299 [2024-11-10 15:26:13.421173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:07.299 15:26:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.299 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 [2024-11-10 15:26:13.561017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:07.300 [2024-11-10 15:26:13.563170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:07.300 [2024-11-10 15:26:13.563224] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:07.300 [2024-11-10 15:26:13.563273] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:07.300 [2024-11-10 15:26:13.563288] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.300 [2024-11-10 15:26:13.563297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:17:07.300 request: 00:17:07.300 { 00:17:07.300 "name": "raid_bdev1", 00:17:07.300 "raid_level": "raid1", 00:17:07.300 "base_bdevs": [ 00:17:07.300 "malloc1", 00:17:07.300 "malloc2" 00:17:07.300 ], 00:17:07.300 "superblock": false, 00:17:07.300 "method": "bdev_raid_create", 00:17:07.300 "req_id": 1 00:17:07.300 } 00:17:07.300 Got JSON-RPC error response 00:17:07.300 response: 00:17:07.300 { 00:17:07.300 "code": -17, 00:17:07.300 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:07.300 } 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:07.300 15:26:13 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 [2024-11-10 15:26:13.625007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:07.300 [2024-11-10 15:26:13.625062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.300 [2024-11-10 15:26:13.625081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008a80 00:17:07.300 [2024-11-10 15:26:13.625094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.300 [2024-11-10 15:26:13.627260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.300 [2024-11-10 15:26:13.627293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:07.300 [2024-11-10 15:26:13.627331] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:07.300 [2024-11-10 15:26:13.627361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:07.300 pt1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.300 
15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.300 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.560 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.560 "name": "raid_bdev1", 00:17:07.560 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:07.560 "strip_size_kb": 0, 00:17:07.560 "state": "configuring", 00:17:07.560 "raid_level": "raid1", 00:17:07.560 "superblock": true, 00:17:07.560 "num_base_bdevs": 2, 00:17:07.560 "num_base_bdevs_discovered": 1, 00:17:07.560 "num_base_bdevs_operational": 2, 00:17:07.560 "base_bdevs_list": [ 00:17:07.560 { 00:17:07.560 "name": "pt1", 00:17:07.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.560 "is_configured": true, 00:17:07.560 "data_offset": 256, 00:17:07.560 "data_size": 7936 00:17:07.560 }, 00:17:07.560 { 00:17:07.560 "name": null, 00:17:07.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.560 "is_configured": false, 00:17:07.560 "data_offset": 256, 00:17:07.560 "data_size": 7936 00:17:07.560 } 00:17:07.560 ] 00:17:07.560 }' 00:17:07.560 15:26:13 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.560 15:26:13 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # 
(( i = 1 )) 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.838 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.838 [2024-11-10 15:26:14.061131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.838 [2024-11-10 15:26:14.061188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.838 [2024-11-10 15:26:14.061207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:07.838 [2024-11-10 15:26:14.061217] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.838 [2024-11-10 15:26:14.061386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.838 [2024-11-10 15:26:14.061424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.838 [2024-11-10 15:26:14.061467] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.838 [2024-11-10 15:26:14.061491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.838 [2024-11-10 15:26:14.061569] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:07.838 [2024-11-10 15:26:14.061581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.838 [2024-11-10 15:26:14.061646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:07.838 [2024-11-10 15:26:14.061743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 
0x617000007b00 00:17:07.839 [2024-11-10 15:26:14.061764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:07.839 [2024-11-10 15:26:14.061830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.839 pt2 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.839 "name": "raid_bdev1", 00:17:07.839 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:07.839 "strip_size_kb": 0, 00:17:07.839 "state": "online", 00:17:07.839 "raid_level": "raid1", 00:17:07.839 "superblock": true, 00:17:07.839 "num_base_bdevs": 2, 00:17:07.839 "num_base_bdevs_discovered": 2, 00:17:07.839 "num_base_bdevs_operational": 2, 00:17:07.839 "base_bdevs_list": [ 00:17:07.839 { 00:17:07.839 "name": "pt1", 00:17:07.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.839 "is_configured": true, 00:17:07.839 "data_offset": 256, 00:17:07.839 "data_size": 7936 00:17:07.839 }, 00:17:07.839 { 00:17:07.839 "name": "pt2", 00:17:07.839 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.839 "is_configured": true, 00:17:07.839 "data_offset": 256, 00:17:07.839 "data_size": 7936 00:17:07.839 } 00:17:07.839 ] 00:17:07.839 }' 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.839 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.423 [2024-11-10 15:26:14.513479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.423 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:08.423 "name": "raid_bdev1", 00:17:08.423 "aliases": [ 00:17:08.423 "bfaf18bb-a273-4163-8f55-51eee57b013f" 00:17:08.423 ], 00:17:08.423 "product_name": "Raid Volume", 00:17:08.423 "block_size": 4096, 00:17:08.423 "num_blocks": 7936, 00:17:08.423 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:08.423 "md_size": 32, 00:17:08.423 "md_interleave": false, 00:17:08.423 "dif_type": 0, 00:17:08.423 "assigned_rate_limits": { 00:17:08.423 "rw_ios_per_sec": 0, 00:17:08.423 "rw_mbytes_per_sec": 0, 00:17:08.423 "r_mbytes_per_sec": 0, 00:17:08.423 "w_mbytes_per_sec": 0 00:17:08.423 }, 00:17:08.423 "claimed": false, 00:17:08.423 "zoned": false, 00:17:08.423 "supported_io_types": { 00:17:08.423 "read": true, 00:17:08.423 "write": true, 00:17:08.423 "unmap": false, 00:17:08.423 
"flush": false, 00:17:08.423 "reset": true, 00:17:08.423 "nvme_admin": false, 00:17:08.423 "nvme_io": false, 00:17:08.423 "nvme_io_md": false, 00:17:08.423 "write_zeroes": true, 00:17:08.423 "zcopy": false, 00:17:08.423 "get_zone_info": false, 00:17:08.423 "zone_management": false, 00:17:08.423 "zone_append": false, 00:17:08.423 "compare": false, 00:17:08.424 "compare_and_write": false, 00:17:08.424 "abort": false, 00:17:08.424 "seek_hole": false, 00:17:08.424 "seek_data": false, 00:17:08.424 "copy": false, 00:17:08.424 "nvme_iov_md": false 00:17:08.424 }, 00:17:08.424 "memory_domains": [ 00:17:08.424 { 00:17:08.424 "dma_device_id": "system", 00:17:08.424 "dma_device_type": 1 00:17:08.424 }, 00:17:08.424 { 00:17:08.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.424 "dma_device_type": 2 00:17:08.424 }, 00:17:08.424 { 00:17:08.424 "dma_device_id": "system", 00:17:08.424 "dma_device_type": 1 00:17:08.424 }, 00:17:08.424 { 00:17:08.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.424 "dma_device_type": 2 00:17:08.424 } 00:17:08.424 ], 00:17:08.424 "driver_specific": { 00:17:08.424 "raid": { 00:17:08.424 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:08.424 "strip_size_kb": 0, 00:17:08.424 "state": "online", 00:17:08.424 "raid_level": "raid1", 00:17:08.424 "superblock": true, 00:17:08.424 "num_base_bdevs": 2, 00:17:08.424 "num_base_bdevs_discovered": 2, 00:17:08.424 "num_base_bdevs_operational": 2, 00:17:08.424 "base_bdevs_list": [ 00:17:08.424 { 00:17:08.424 "name": "pt1", 00:17:08.424 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:08.424 "is_configured": true, 00:17:08.424 "data_offset": 256, 00:17:08.424 "data_size": 7936 00:17:08.424 }, 00:17:08.424 { 00:17:08.424 "name": "pt2", 00:17:08.424 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.424 "is_configured": true, 00:17:08.424 "data_offset": 256, 00:17:08.424 "data_size": 7936 00:17:08.424 } 00:17:08.424 ] 00:17:08.424 } 00:17:08.424 } 00:17:08.424 }' 00:17:08.424 15:26:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:08.424 pt2' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 [2024-11-10 15:26:14.725535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' bfaf18bb-a273-4163-8f55-51eee57b013f '!=' bfaf18bb-a273-4163-8f55-51eee57b013f ']' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd 
bdev_passthru_delete pt1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 [2024-11-10 15:26:14.757337] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.424 15:26:14 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.424 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.684 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.684 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.684 "name": "raid_bdev1", 00:17:08.684 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:08.684 "strip_size_kb": 0, 00:17:08.684 "state": "online", 00:17:08.684 "raid_level": "raid1", 00:17:08.684 "superblock": true, 00:17:08.684 "num_base_bdevs": 2, 00:17:08.684 "num_base_bdevs_discovered": 1, 00:17:08.684 "num_base_bdevs_operational": 1, 00:17:08.684 "base_bdevs_list": [ 00:17:08.684 { 00:17:08.684 "name": null, 00:17:08.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.684 "is_configured": false, 00:17:08.684 "data_offset": 0, 00:17:08.684 "data_size": 7936 00:17:08.684 }, 00:17:08.684 { 00:17:08.684 "name": "pt2", 00:17:08.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.684 "is_configured": true, 00:17:08.684 "data_offset": 256, 00:17:08.684 "data_size": 7936 00:17:08.684 } 00:17:08.684 ] 00:17:08.684 }' 00:17:08.684 15:26:14 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.684 15:26:14 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.944 [2024-11-10 15:26:15.249446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
raid_bdev1 00:17:08.944 [2024-11-10 15:26:15.249477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.944 [2024-11-10 15:26:15.249536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.944 [2024-11-10 15:26:15.249575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.944 [2024-11-10 15:26:15.249586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:08.944 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.204 15:26:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.204 [2024-11-10 15:26:15.325471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:09.204 [2024-11-10 15:26:15.325522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.204 [2024-11-10 15:26:15.325537] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:09.204 [2024-11-10 15:26:15.325548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.204 [2024-11-10 15:26:15.327800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.204 [2024-11-10 15:26:15.327841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:09.204 [2024-11-10 15:26:15.327887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:17:09.204 [2024-11-10 15:26:15.327921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.204 [2024-11-10 15:26:15.327984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:09.204 [2024-11-10 15:26:15.328000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:09.204 [2024-11-10 15:26:15.328090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:09.204 [2024-11-10 15:26:15.328191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:09.204 [2024-11-10 15:26:15.328198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:09.204 [2024-11-10 15:26:15.328261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.204 pt2 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.204 "name": "raid_bdev1", 00:17:09.204 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:09.204 "strip_size_kb": 0, 00:17:09.204 "state": "online", 00:17:09.204 "raid_level": "raid1", 00:17:09.204 "superblock": true, 00:17:09.204 "num_base_bdevs": 2, 00:17:09.204 "num_base_bdevs_discovered": 1, 00:17:09.204 "num_base_bdevs_operational": 1, 00:17:09.204 "base_bdevs_list": [ 00:17:09.204 { 00:17:09.204 "name": null, 00:17:09.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.204 "is_configured": false, 00:17:09.204 "data_offset": 256, 00:17:09.204 "data_size": 7936 00:17:09.204 }, 00:17:09.204 { 00:17:09.204 "name": "pt2", 00:17:09.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.204 "is_configured": true, 00:17:09.204 "data_offset": 256, 00:17:09.204 "data_size": 7936 00:17:09.204 } 00:17:09.204 ] 00:17:09.204 }' 00:17:09.204 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.204 15:26:15 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 [2024-11-10 15:26:15.753562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.464 [2024-11-10 15:26:15.753587] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.464 [2024-11-10 15:26:15.753630] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.464 [2024-11-10 15:26:15.753668] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.464 [2024-11-10 15:26:15.753675] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.464 [2024-11-10 15:26:15.817591] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:09.464 [2024-11-10 15:26:15.817634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:09.464 [2024-11-10 15:26:15.817652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:09.464 [2024-11-10 15:26:15.817659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:09.464 [2024-11-10 15:26:15.819850] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:09.464 [2024-11-10 15:26:15.819882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:09.464 [2024-11-10 15:26:15.819926] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:09.464 [2024-11-10 15:26:15.819951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:09.464 [2024-11-10 15:26:15.820044] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:09.464 [2024-11-10 15:26:15.820054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:09.464 [2024-11-10 15:26:15.820069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:17:09.464 
[2024-11-10 15:26:15.820126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:09.464 [2024-11-10 15:26:15.820182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:09.464 [2024-11-10 15:26:15.820190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:09.464 [2024-11-10 15:26:15.820260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.464 [2024-11-10 15:26:15.820346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:09.464 [2024-11-10 15:26:15.820359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:09.464 [2024-11-10 15:26:15.820437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.464 pt1 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.464 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.724 "name": "raid_bdev1", 00:17:09.724 "uuid": "bfaf18bb-a273-4163-8f55-51eee57b013f", 00:17:09.724 "strip_size_kb": 0, 00:17:09.724 "state": "online", 00:17:09.724 "raid_level": "raid1", 00:17:09.724 "superblock": true, 00:17:09.724 "num_base_bdevs": 2, 00:17:09.724 "num_base_bdevs_discovered": 1, 00:17:09.724 "num_base_bdevs_operational": 1, 00:17:09.724 "base_bdevs_list": [ 00:17:09.724 { 00:17:09.724 "name": null, 00:17:09.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.724 "is_configured": false, 00:17:09.724 "data_offset": 256, 00:17:09.724 "data_size": 7936 00:17:09.724 }, 00:17:09.724 { 00:17:09.724 "name": "pt2", 00:17:09.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.724 "is_configured": true, 00:17:09.724 "data_offset": 256, 00:17:09.724 "data_size": 7936 00:17:09.724 } 00:17:09.724 ] 00:17:09.724 }' 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.724 15:26:15 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:09.983 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.983 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:09.983 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:09.983 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:10.244 [2024-11-10 15:26:16.357957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' bfaf18bb-a273-4163-8f55-51eee57b013f '!=' bfaf18bb-a273-4163-8f55-51eee57b013f ']' 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 99185 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # '[' 
-z 99185 ']' 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # kill -0 99185 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # uname 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 99185 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.244 killing process with pid 99185 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 99185' 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@971 -- # kill 99185 00:17:10.244 [2024-11-10 15:26:16.440180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.244 [2024-11-10 15:26:16.440254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.244 [2024-11-10 15:26:16.440292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.244 [2024-11-10 15:26:16.440304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:10.244 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@976 -- # wait 99185 00:17:10.244 [2024-11-10 15:26:16.483353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.505 15:26:16 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:10.505 00:17:10.505 real 0m5.101s 00:17:10.505 user 0m8.109s 00:17:10.505 sys 0m1.213s 
00:17:10.505 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:10.505 15:26:16 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.505 ************************************ 00:17:10.505 END TEST raid_superblock_test_md_separate 00:17:10.505 ************************************ 00:17:10.765 15:26:16 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:10.765 15:26:16 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:10.765 15:26:16 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:10.765 15:26:16 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:10.765 15:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.765 ************************************ 00:17:10.765 START TEST raid_rebuild_test_sb_md_separate 00:17:10.765 ************************************ 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false true 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.765 
15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:10.765 15:26:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=99501 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 99501 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@833 -- # '[' -z 99501 ']' 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:10.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:10.765 15:26:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.765 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.765 Zero copy mechanism will not be used. 00:17:10.765 [2024-11-10 15:26:17.000909] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:17:10.765 [2024-11-10 15:26:17.001054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99501 ] 00:17:11.025 [2024-11-10 15:26:17.138911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:11.025 [2024-11-10 15:26:17.178241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.025 [2024-11-10 15:26:17.219631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.025 [2024-11-10 15:26:17.297241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.025 [2024-11-10 15:26:17.297285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@866 -- # return 0 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 BaseBdev1_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.596 
15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 [2024-11-10 15:26:17.825829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.596 [2024-11-10 15:26:17.825903] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.596 [2024-11-10 15:26:17.825929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.596 [2024-11-10 15:26:17.825946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.596 [2024-11-10 15:26:17.828281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.596 [2024-11-10 15:26:17.828319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.596 BaseBdev1 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 BaseBdev2_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 [2024-11-10 15:26:17.862163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.596 [2024-11-10 15:26:17.862219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.596 [2024-11-10 15:26:17.862239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.596 [2024-11-10 15:26:17.862250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.596 [2024-11-10 15:26:17.864491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.596 [2024-11-10 15:26:17.864531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.596 BaseBdev2 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 spare_malloc 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 spare_delay 
00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 [2024-11-10 15:26:17.926219] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.596 [2024-11-10 15:26:17.926310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.596 [2024-11-10 15:26:17.926348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:11.596 [2024-11-10 15:26:17.926366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.596 [2024-11-10 15:26:17.929639] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.596 [2024-11-10 15:26:17.929691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.596 spare 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.596 [2024-11-10 15:26:17.938336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.596 [2024-11-10 15:26:17.940726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev2 is claimed 00:17:11.596 [2024-11-10 15:26:17.940911] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:11.596 [2024-11-10 15:26:17.940927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:11.596 [2024-11-10 15:26:17.941034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:11.596 [2024-11-10 15:26:17.941163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:11.596 [2024-11-10 15:26:17.941184] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:11.596 [2024-11-10 15:26:17.941293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.596 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.856 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.856 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.856 "name": "raid_bdev1", 00:17:11.856 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:11.856 "strip_size_kb": 0, 00:17:11.856 "state": "online", 00:17:11.856 "raid_level": "raid1", 00:17:11.856 "superblock": true, 00:17:11.856 "num_base_bdevs": 2, 00:17:11.856 "num_base_bdevs_discovered": 2, 00:17:11.856 "num_base_bdevs_operational": 2, 00:17:11.856 "base_bdevs_list": [ 00:17:11.856 { 00:17:11.856 "name": "BaseBdev1", 00:17:11.856 "uuid": "714f8f9a-b1bc-51f1-8985-9847c8dd041a", 00:17:11.856 "is_configured": true, 00:17:11.856 "data_offset": 256, 00:17:11.856 "data_size": 7936 00:17:11.856 }, 00:17:11.856 { 00:17:11.856 "name": "BaseBdev2", 00:17:11.856 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:11.856 "is_configured": true, 00:17:11.856 "data_offset": 256, 00:17:11.856 "data_size": 7936 00:17:11.856 } 00:17:11.856 ] 00:17:11.856 }' 00:17:11.856 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.856 15:26:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.116 15:26:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.116 [2024-11-10 15:26:18.398671] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks 
/var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.116 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.376 [2024-11-10 15:26:18.654570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:12.376 /dev/nbd0 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:12.376 
15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.376 1+0 records in 00:17:12.376 1+0 records out 00:17:12.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422206 s, 9.7 MB/s 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:17:12.376 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:12.637 15:26:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd 
if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:13.207 7936+0 records in 00:17:13.207 7936+0 records out 00:17:13.207 32505856 bytes (33 MB, 31 MiB) copied, 0.587961 s, 55.3 MB/s 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.207 [2024-11-10 15:26:19.535446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # 
break 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 [2024-11-10 15:26:19.543529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.207 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.477 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.477 "name": "raid_bdev1", 00:17:13.477 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:13.477 "strip_size_kb": 0, 00:17:13.477 "state": "online", 00:17:13.477 "raid_level": "raid1", 00:17:13.477 "superblock": true, 00:17:13.477 "num_base_bdevs": 2, 00:17:13.477 "num_base_bdevs_discovered": 1, 00:17:13.477 "num_base_bdevs_operational": 1, 00:17:13.477 "base_bdevs_list": [ 00:17:13.477 { 00:17:13.477 "name": null, 00:17:13.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.477 "is_configured": false, 00:17:13.477 "data_offset": 0, 00:17:13.477 "data_size": 7936 00:17:13.477 }, 00:17:13.477 { 00:17:13.477 "name": "BaseBdev2", 00:17:13.477 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:13.477 "is_configured": true, 00:17:13.477 "data_offset": 256, 00:17:13.477 "data_size": 7936 00:17:13.477 } 00:17:13.477 ] 00:17:13.477 }' 00:17:13.477 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.477 15:26:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.736 15:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.736 15:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:13.736 15:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.736 [2024-11-10 15:26:20.019681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.736 [2024-11-10 15:26:20.024047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d670 00:17:13.736 15:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.736 15:26:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.736 [2024-11-10 15:26:20.026241] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.674 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.674 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.674 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.674 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.674 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.934 15:26:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.934 "name": "raid_bdev1", 00:17:14.934 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:14.934 "strip_size_kb": 0, 00:17:14.934 "state": "online", 00:17:14.934 "raid_level": "raid1", 00:17:14.934 "superblock": true, 00:17:14.934 "num_base_bdevs": 2, 00:17:14.934 "num_base_bdevs_discovered": 2, 00:17:14.934 "num_base_bdevs_operational": 2, 00:17:14.934 "process": { 00:17:14.934 "type": "rebuild", 00:17:14.934 "target": "spare", 00:17:14.934 "progress": { 00:17:14.934 "blocks": 2560, 00:17:14.934 "percent": 32 00:17:14.934 } 00:17:14.934 }, 00:17:14.934 "base_bdevs_list": [ 00:17:14.934 { 00:17:14.934 "name": "spare", 00:17:14.934 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:14.934 "is_configured": true, 00:17:14.934 "data_offset": 256, 00:17:14.934 "data_size": 7936 00:17:14.934 }, 00:17:14.934 { 00:17:14.934 "name": "BaseBdev2", 00:17:14.934 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:14.934 "is_configured": true, 00:17:14.934 "data_offset": 256, 00:17:14.934 "data_size": 7936 00:17:14.934 } 00:17:14.934 ] 00:17:14.934 }' 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.934 [2024-11-10 15:26:21.192816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.934 [2024-11-10 15:26:21.236775] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.934 [2024-11-10 15:26:21.236837] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.934 [2024-11-10 15:26:21.236850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.934 [2024-11-10 15:26:21.236860] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.934 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.194 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.194 "name": "raid_bdev1", 00:17:15.194 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:15.194 "strip_size_kb": 0, 00:17:15.194 "state": "online", 00:17:15.194 "raid_level": "raid1", 00:17:15.194 "superblock": true, 00:17:15.194 "num_base_bdevs": 2, 00:17:15.194 "num_base_bdevs_discovered": 1, 00:17:15.194 "num_base_bdevs_operational": 1, 00:17:15.194 "base_bdevs_list": [ 00:17:15.194 { 00:17:15.194 "name": null, 00:17:15.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.194 "is_configured": false, 00:17:15.194 "data_offset": 0, 00:17:15.194 "data_size": 7936 00:17:15.194 }, 00:17:15.194 { 00:17:15.194 "name": "BaseBdev2", 00:17:15.194 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:15.194 "is_configured": true, 00:17:15.194 "data_offset": 256, 00:17:15.194 "data_size": 7936 00:17:15.194 } 00:17:15.194 ] 00:17:15.194 }' 00:17:15.194 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.194 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.454 15:26:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.454 "name": "raid_bdev1", 00:17:15.454 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:15.454 "strip_size_kb": 0, 00:17:15.454 "state": "online", 00:17:15.454 "raid_level": "raid1", 00:17:15.454 "superblock": true, 00:17:15.454 "num_base_bdevs": 2, 00:17:15.454 "num_base_bdevs_discovered": 1, 00:17:15.454 "num_base_bdevs_operational": 1, 00:17:15.454 "base_bdevs_list": [ 00:17:15.454 { 00:17:15.454 "name": null, 00:17:15.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.454 "is_configured": false, 00:17:15.454 "data_offset": 0, 00:17:15.454 "data_size": 7936 00:17:15.454 }, 00:17:15.454 { 00:17:15.454 "name": "BaseBdev2", 00:17:15.454 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:15.454 "is_configured": true, 00:17:15.454 "data_offset": 256, 00:17:15.454 "data_size": 7936 
00:17:15.454 } 00:17:15.454 ] 00:17:15.454 }' 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.454 [2024-11-10 15:26:21.801893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.454 [2024-11-10 15:26:21.805490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d740 00:17:15.454 [2024-11-10 15:26:21.807666] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.454 15:26:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 
-- # local target=spare 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.835 "name": "raid_bdev1", 00:17:16.835 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:16.835 "strip_size_kb": 0, 00:17:16.835 "state": "online", 00:17:16.835 "raid_level": "raid1", 00:17:16.835 "superblock": true, 00:17:16.835 "num_base_bdevs": 2, 00:17:16.835 "num_base_bdevs_discovered": 2, 00:17:16.835 "num_base_bdevs_operational": 2, 00:17:16.835 "process": { 00:17:16.835 "type": "rebuild", 00:17:16.835 "target": "spare", 00:17:16.835 "progress": { 00:17:16.835 "blocks": 2560, 00:17:16.835 "percent": 32 00:17:16.835 } 00:17:16.835 }, 00:17:16.835 "base_bdevs_list": [ 00:17:16.835 { 00:17:16.835 "name": "spare", 00:17:16.835 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:16.835 "is_configured": true, 00:17:16.835 "data_offset": 256, 00:17:16.835 "data_size": 7936 00:17:16.835 }, 00:17:16.835 { 00:17:16.835 "name": "BaseBdev2", 00:17:16.835 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:16.835 "is_configured": true, 00:17:16.835 "data_offset": 256, 00:17:16.835 "data_size": 7936 00:17:16.835 } 00:17:16.835 ] 00:17:16.835 }' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:16.835 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=596 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.835 
15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.835 15:26:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.835 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.835 "name": "raid_bdev1", 00:17:16.835 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:16.835 "strip_size_kb": 0, 00:17:16.835 "state": "online", 00:17:16.835 "raid_level": "raid1", 00:17:16.835 "superblock": true, 00:17:16.835 "num_base_bdevs": 2, 00:17:16.835 "num_base_bdevs_discovered": 2, 00:17:16.835 "num_base_bdevs_operational": 2, 00:17:16.835 "process": { 00:17:16.835 "type": "rebuild", 00:17:16.835 "target": "spare", 00:17:16.835 "progress": { 00:17:16.835 "blocks": 2816, 00:17:16.835 "percent": 35 00:17:16.835 } 00:17:16.835 }, 00:17:16.835 "base_bdevs_list": [ 00:17:16.835 { 00:17:16.835 "name": "spare", 00:17:16.835 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:16.835 "is_configured": true, 00:17:16.835 "data_offset": 256, 00:17:16.835 "data_size": 7936 00:17:16.835 }, 00:17:16.835 { 00:17:16.835 "name": "BaseBdev2", 00:17:16.835 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:16.836 "is_configured": true, 00:17:16.836 "data_offset": 256, 00:17:16.836 "data_size": 7936 00:17:16.836 } 00:17:16.836 ] 00:17:16.836 }' 00:17:16.836 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.836 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.836 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.836 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.836 15:26:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.775 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.034 "name": "raid_bdev1", 00:17:18.034 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:18.034 "strip_size_kb": 0, 00:17:18.034 
"state": "online", 00:17:18.034 "raid_level": "raid1", 00:17:18.034 "superblock": true, 00:17:18.034 "num_base_bdevs": 2, 00:17:18.034 "num_base_bdevs_discovered": 2, 00:17:18.034 "num_base_bdevs_operational": 2, 00:17:18.034 "process": { 00:17:18.034 "type": "rebuild", 00:17:18.034 "target": "spare", 00:17:18.034 "progress": { 00:17:18.034 "blocks": 5888, 00:17:18.034 "percent": 74 00:17:18.034 } 00:17:18.034 }, 00:17:18.034 "base_bdevs_list": [ 00:17:18.034 { 00:17:18.034 "name": "spare", 00:17:18.034 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:18.034 "is_configured": true, 00:17:18.034 "data_offset": 256, 00:17:18.034 "data_size": 7936 00:17:18.034 }, 00:17:18.034 { 00:17:18.034 "name": "BaseBdev2", 00:17:18.034 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:18.034 "is_configured": true, 00:17:18.034 "data_offset": 256, 00:17:18.034 "data_size": 7936 00:17:18.034 } 00:17:18.034 ] 00:17:18.034 }' 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.034 15:26:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.603 [2024-11-10 15:26:24.932757] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:18.603 [2024-11-10 15:26:24.932835] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:18.603 [2024-11-10 15:26:24.932939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.173 "name": "raid_bdev1", 00:17:19.173 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:19.173 "strip_size_kb": 0, 00:17:19.173 "state": "online", 00:17:19.173 "raid_level": "raid1", 00:17:19.173 "superblock": true, 00:17:19.173 "num_base_bdevs": 2, 00:17:19.173 "num_base_bdevs_discovered": 2, 00:17:19.173 "num_base_bdevs_operational": 2, 00:17:19.173 "base_bdevs_list": [ 00:17:19.173 { 00:17:19.173 "name": "spare", 00:17:19.173 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:19.173 "is_configured": true, 00:17:19.173 "data_offset": 256, 00:17:19.173 "data_size": 7936 
00:17:19.173 }, 00:17:19.173 { 00:17:19.173 "name": "BaseBdev2", 00:17:19.173 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:19.173 "is_configured": true, 00:17:19.173 "data_offset": 256, 00:17:19.173 "data_size": 7936 00:17:19.173 } 00:17:19.173 ] 00:17:19.173 }' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.173 
15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.173 "name": "raid_bdev1", 00:17:19.173 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:19.173 "strip_size_kb": 0, 00:17:19.173 "state": "online", 00:17:19.173 "raid_level": "raid1", 00:17:19.173 "superblock": true, 00:17:19.173 "num_base_bdevs": 2, 00:17:19.173 "num_base_bdevs_discovered": 2, 00:17:19.173 "num_base_bdevs_operational": 2, 00:17:19.173 "base_bdevs_list": [ 00:17:19.173 { 00:17:19.173 "name": "spare", 00:17:19.173 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:19.173 "is_configured": true, 00:17:19.173 "data_offset": 256, 00:17:19.173 "data_size": 7936 00:17:19.173 }, 00:17:19.173 { 00:17:19.173 "name": "BaseBdev2", 00:17:19.173 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:19.173 "is_configured": true, 00:17:19.173 "data_offset": 256, 00:17:19.173 "data_size": 7936 00:17:19.173 } 00:17:19.173 ] 00:17:19.173 }' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.173 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.433 15:26:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.433 "name": "raid_bdev1", 00:17:19.433 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:19.433 "strip_size_kb": 0, 00:17:19.433 "state": "online", 00:17:19.433 "raid_level": "raid1", 00:17:19.433 "superblock": true, 00:17:19.433 "num_base_bdevs": 2, 00:17:19.433 "num_base_bdevs_discovered": 2, 00:17:19.433 "num_base_bdevs_operational": 2, 00:17:19.433 "base_bdevs_list": [ 00:17:19.433 { 00:17:19.433 "name": "spare", 00:17:19.433 "uuid": 
"57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:19.433 "is_configured": true, 00:17:19.433 "data_offset": 256, 00:17:19.433 "data_size": 7936 00:17:19.433 }, 00:17:19.433 { 00:17:19.433 "name": "BaseBdev2", 00:17:19.433 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:19.433 "is_configured": true, 00:17:19.433 "data_offset": 256, 00:17:19.433 "data_size": 7936 00:17:19.433 } 00:17:19.433 ] 00:17:19.433 }' 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.433 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.693 [2024-11-10 15:26:25.985370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.693 [2024-11-10 15:26:25.985405] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.693 [2024-11-10 15:26:25.985487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.693 [2024-11-10 15:26:25.985552] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.693 [2024-11-10 15:26:25.985562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.693 15:26:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.693 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.953 
/dev/nbd0 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.953 1+0 records in 00:17:19.953 1+0 records out 00:17:19.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447098 s, 9.2 MB/s 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.953 15:26:26 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.953 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:20.213 /dev/nbd1 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # local i 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # break 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:17:20.213 1+0 records in 00:17:20.213 1+0 records out 00:17:20.213 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000469457 s, 8.7 MB/s 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # size=4096 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # return 0 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.213 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.473 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.733 15:26:26 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.733 
15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.733 [2024-11-10 15:26:27.070958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.733 [2024-11-10 15:26:27.071024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.733 [2024-11-10 15:26:27.071053] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:20.733 [2024-11-10 15:26:27.071062] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.733 [2024-11-10 15:26:27.073300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.733 [2024-11-10 15:26:27.073336] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.733 [2024-11-10 15:26:27.073389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:17:20.733 [2024-11-10 15:26:27.073429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.733 [2024-11-10 15:26:27.073535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.733 spare 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.733 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.993 [2024-11-10 15:26:27.173605] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:20.993 [2024-11-10 15:26:27.173636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.993 [2024-11-10 15:26:27.173742] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1f60 00:17:20.993 [2024-11-10 15:26:27.173845] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:20.993 [2024-11-10 15:26:27.173852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:20.993 [2024-11-10 15:26:27.173940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.993 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.993 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.993 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.993 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.994 "name": "raid_bdev1", 00:17:20.994 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:20.994 "strip_size_kb": 0, 00:17:20.994 "state": "online", 00:17:20.994 "raid_level": "raid1", 00:17:20.994 "superblock": true, 00:17:20.994 "num_base_bdevs": 2, 00:17:20.994 "num_base_bdevs_discovered": 2, 00:17:20.994 "num_base_bdevs_operational": 2, 00:17:20.994 "base_bdevs_list": [ 
00:17:20.994 { 00:17:20.994 "name": "spare", 00:17:20.994 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:20.994 "is_configured": true, 00:17:20.994 "data_offset": 256, 00:17:20.994 "data_size": 7936 00:17:20.994 }, 00:17:20.994 { 00:17:20.994 "name": "BaseBdev2", 00:17:20.994 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:20.994 "is_configured": true, 00:17:20.994 "data_offset": 256, 00:17:20.994 "data_size": 7936 00:17:20.994 } 00:17:20.994 ] 00:17:20.994 }' 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.994 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.254 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.514 "name": "raid_bdev1", 00:17:21.514 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:21.514 "strip_size_kb": 0, 00:17:21.514 "state": "online", 00:17:21.514 "raid_level": "raid1", 00:17:21.514 "superblock": true, 00:17:21.514 "num_base_bdevs": 2, 00:17:21.514 "num_base_bdevs_discovered": 2, 00:17:21.514 "num_base_bdevs_operational": 2, 00:17:21.514 "base_bdevs_list": [ 00:17:21.514 { 00:17:21.514 "name": "spare", 00:17:21.514 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:21.514 "is_configured": true, 00:17:21.514 "data_offset": 256, 00:17:21.514 "data_size": 7936 00:17:21.514 }, 00:17:21.514 { 00:17:21.514 "name": "BaseBdev2", 00:17:21.514 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:21.514 "is_configured": true, 00:17:21.514 "data_offset": 256, 00:17:21.514 "data_size": 7936 00:17:21.514 } 00:17:21.514 ] 00:17:21.514 }' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.514 [2024-11-10 15:26:27.791197] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.514 "name": "raid_bdev1", 00:17:21.514 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:21.514 "strip_size_kb": 0, 00:17:21.514 "state": "online", 00:17:21.514 "raid_level": "raid1", 00:17:21.514 "superblock": true, 00:17:21.514 "num_base_bdevs": 2, 00:17:21.514 "num_base_bdevs_discovered": 1, 00:17:21.514 "num_base_bdevs_operational": 1, 00:17:21.514 "base_bdevs_list": [ 00:17:21.514 { 00:17:21.514 "name": null, 00:17:21.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.514 "is_configured": false, 00:17:21.514 "data_offset": 0, 00:17:21.514 "data_size": 7936 00:17:21.514 }, 00:17:21.514 { 00:17:21.514 "name": "BaseBdev2", 00:17:21.514 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:21.514 "is_configured": true, 00:17:21.514 "data_offset": 256, 00:17:21.514 "data_size": 7936 00:17:21.514 } 00:17:21.514 ] 00:17:21.514 }' 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.514 15:26:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.084 15:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:22.084 15:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:22.084 15:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.084 [2024-11-10 15:26:28.251366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.084 [2024-11-10 15:26:28.251535] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.084 [2024-11-10 15:26:28.251556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:22.084 [2024-11-10 15:26:28.251615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.084 [2024-11-10 15:26:28.255783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2030 00:17:22.084 15:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.084 15:26:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:22.084 [2024-11-10 15:26:28.257926] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.023 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.024 "name": "raid_bdev1", 00:17:23.024 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:23.024 "strip_size_kb": 0, 00:17:23.024 "state": "online", 00:17:23.024 "raid_level": "raid1", 00:17:23.024 "superblock": true, 00:17:23.024 "num_base_bdevs": 2, 00:17:23.024 "num_base_bdevs_discovered": 2, 00:17:23.024 "num_base_bdevs_operational": 2, 00:17:23.024 "process": { 00:17:23.024 "type": "rebuild", 00:17:23.024 "target": "spare", 00:17:23.024 "progress": { 00:17:23.024 "blocks": 2560, 00:17:23.024 "percent": 32 00:17:23.024 } 00:17:23.024 }, 00:17:23.024 "base_bdevs_list": [ 00:17:23.024 { 00:17:23.024 "name": "spare", 00:17:23.024 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:23.024 "is_configured": true, 00:17:23.024 "data_offset": 256, 00:17:23.024 "data_size": 7936 00:17:23.024 }, 00:17:23.024 { 00:17:23.024 "name": "BaseBdev2", 00:17:23.024 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:23.024 "is_configured": true, 00:17:23.024 "data_offset": 256, 00:17:23.024 "data_size": 7936 00:17:23.024 } 00:17:23.024 ] 00:17:23.024 }' 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.024 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.283 15:26:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.284 [2024-11-10 15:26:29.423666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.284 [2024-11-10 15:26:29.467593] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:23.284 [2024-11-10 15:26:29.467669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.284 [2024-11-10 15:26:29.467683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.284 [2024-11-10 15:26:29.467693] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.284 15:26:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.284 "name": "raid_bdev1", 00:17:23.284 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:23.284 "strip_size_kb": 0, 00:17:23.284 "state": "online", 00:17:23.284 "raid_level": "raid1", 00:17:23.284 "superblock": true, 00:17:23.284 "num_base_bdevs": 2, 00:17:23.284 "num_base_bdevs_discovered": 1, 00:17:23.284 "num_base_bdevs_operational": 1, 00:17:23.284 "base_bdevs_list": [ 00:17:23.284 { 00:17:23.284 "name": null, 00:17:23.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.284 "is_configured": false, 00:17:23.284 "data_offset": 0, 00:17:23.284 "data_size": 7936 00:17:23.284 }, 00:17:23.284 { 00:17:23.284 "name": "BaseBdev2", 00:17:23.284 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:23.284 "is_configured": true, 00:17:23.284 "data_offset": 256, 00:17:23.284 "data_size": 7936 00:17:23.284 } 
00:17:23.284 ] 00:17:23.284 }' 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.284 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.853 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.853 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.853 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.853 [2024-11-10 15:26:29.928519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.853 [2024-11-10 15:26:29.928580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.853 [2024-11-10 15:26:29.928608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:23.853 [2024-11-10 15:26:29.928620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.853 [2024-11-10 15:26:29.928885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.853 [2024-11-10 15:26:29.928912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.853 [2024-11-10 15:26:29.928983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.853 [2024-11-10 15:26:29.929001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.853 [2024-11-10 15:26:29.929012] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:23.853 [2024-11-10 15:26:29.929047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.853 [2024-11-10 15:26:29.932348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c2100 00:17:23.853 [2024-11-10 15:26:29.934454] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.853 spare 00:17:23.853 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.853 15:26:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.793 "name": 
"raid_bdev1", 00:17:24.793 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:24.793 "strip_size_kb": 0, 00:17:24.793 "state": "online", 00:17:24.793 "raid_level": "raid1", 00:17:24.793 "superblock": true, 00:17:24.793 "num_base_bdevs": 2, 00:17:24.793 "num_base_bdevs_discovered": 2, 00:17:24.793 "num_base_bdevs_operational": 2, 00:17:24.793 "process": { 00:17:24.793 "type": "rebuild", 00:17:24.793 "target": "spare", 00:17:24.793 "progress": { 00:17:24.793 "blocks": 2560, 00:17:24.793 "percent": 32 00:17:24.793 } 00:17:24.793 }, 00:17:24.793 "base_bdevs_list": [ 00:17:24.793 { 00:17:24.793 "name": "spare", 00:17:24.793 "uuid": "57f9990f-9843-5dfa-817f-78dde3cda248", 00:17:24.793 "is_configured": true, 00:17:24.793 "data_offset": 256, 00:17:24.793 "data_size": 7936 00:17:24.793 }, 00:17:24.793 { 00:17:24.793 "name": "BaseBdev2", 00:17:24.793 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:24.793 "is_configured": true, 00:17:24.793 "data_offset": 256, 00:17:24.793 "data_size": 7936 00:17:24.793 } 00:17:24.793 ] 00:17:24.793 }' 00:17:24.793 15:26:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.793 [2024-11-10 15:26:31.076571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:24.793 [2024-11-10 15:26:31.144249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.793 [2024-11-10 15:26:31.144306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.793 [2024-11-10 15:26:31.144323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.793 [2024-11-10 15:26:31.144331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.793 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.053 "name": "raid_bdev1", 00:17:25.053 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:25.053 "strip_size_kb": 0, 00:17:25.053 "state": "online", 00:17:25.053 "raid_level": "raid1", 00:17:25.053 "superblock": true, 00:17:25.053 "num_base_bdevs": 2, 00:17:25.053 "num_base_bdevs_discovered": 1, 00:17:25.053 "num_base_bdevs_operational": 1, 00:17:25.053 "base_bdevs_list": [ 00:17:25.053 { 00:17:25.053 "name": null, 00:17:25.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.053 "is_configured": false, 00:17:25.053 "data_offset": 0, 00:17:25.053 "data_size": 7936 00:17:25.053 }, 00:17:25.053 { 00:17:25.053 "name": "BaseBdev2", 00:17:25.053 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:25.053 "is_configured": true, 00:17:25.053 "data_offset": 256, 00:17:25.053 "data_size": 7936 00:17:25.053 } 00:17:25.053 ] 00:17:25.053 }' 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.053 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.313 15:26:31 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.313 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.573 "name": "raid_bdev1", 00:17:25.573 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:25.573 "strip_size_kb": 0, 00:17:25.573 "state": "online", 00:17:25.573 "raid_level": "raid1", 00:17:25.573 "superblock": true, 00:17:25.573 "num_base_bdevs": 2, 00:17:25.573 "num_base_bdevs_discovered": 1, 00:17:25.573 "num_base_bdevs_operational": 1, 00:17:25.573 "base_bdevs_list": [ 00:17:25.573 { 00:17:25.573 "name": null, 00:17:25.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.573 "is_configured": false, 00:17:25.573 "data_offset": 0, 00:17:25.573 "data_size": 7936 00:17:25.573 }, 00:17:25.573 { 00:17:25.573 "name": "BaseBdev2", 00:17:25.573 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:25.573 "is_configured": true, 00:17:25.573 "data_offset": 256, 00:17:25.573 "data_size": 7936 00:17:25.573 } 00:17:25.573 ] 00:17:25.573 }' 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.573 [2024-11-10 15:26:31.796868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.573 [2024-11-10 15:26:31.796921] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.573 [2024-11-10 15:26:31.796944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:25.573 [2024-11-10 15:26:31.796952] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.573 [2024-11-10 15:26:31.797164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.573 [2024-11-10 15:26:31.797197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:25.573 [2024-11-10 15:26:31.797254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:25.573 [2024-11-10 15:26:31.797288] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.573 [2024-11-10 15:26:31.797298] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.573 [2024-11-10 15:26:31.797325] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:25.573 BaseBdev1 00:17:25.573 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.574 15:26:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.514 "name": "raid_bdev1", 00:17:26.514 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:26.514 "strip_size_kb": 0, 00:17:26.514 "state": "online", 00:17:26.514 "raid_level": "raid1", 00:17:26.514 "superblock": true, 00:17:26.514 "num_base_bdevs": 2, 00:17:26.514 "num_base_bdevs_discovered": 1, 00:17:26.514 "num_base_bdevs_operational": 1, 00:17:26.514 "base_bdevs_list": [ 00:17:26.514 { 00:17:26.514 "name": null, 00:17:26.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.514 "is_configured": false, 00:17:26.514 "data_offset": 0, 00:17:26.514 "data_size": 7936 00:17:26.514 }, 00:17:26.514 { 00:17:26.514 "name": "BaseBdev2", 00:17:26.514 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:26.514 "is_configured": true, 00:17:26.514 "data_offset": 256, 00:17:26.514 "data_size": 7936 00:17:26.514 } 00:17:26.514 ] 00:17:26.514 }' 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.514 15:26:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.084 "name": "raid_bdev1", 00:17:27.084 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:27.084 "strip_size_kb": 0, 00:17:27.084 "state": "online", 00:17:27.084 "raid_level": "raid1", 00:17:27.084 "superblock": true, 00:17:27.084 "num_base_bdevs": 2, 00:17:27.084 "num_base_bdevs_discovered": 1, 00:17:27.084 "num_base_bdevs_operational": 1, 00:17:27.084 "base_bdevs_list": [ 00:17:27.084 { 00:17:27.084 "name": null, 00:17:27.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.084 "is_configured": false, 00:17:27.084 "data_offset": 0, 00:17:27.084 "data_size": 7936 00:17:27.084 }, 00:17:27.084 { 00:17:27.084 "name": "BaseBdev2", 00:17:27.084 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:27.084 "is_configured": 
true, 00:17:27.084 "data_offset": 256, 00:17:27.084 "data_size": 7936 00:17:27.084 } 00:17:27.084 ] 00:17:27.084 }' 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.084 [2024-11-10 15:26:33.437352] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.084 [2024-11-10 15:26:33.437488] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.084 [2024-11-10 15:26:33.437503] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:27.084 request: 00:17:27.084 { 00:17:27.084 "base_bdev": "BaseBdev1", 00:17:27.084 "raid_bdev": "raid_bdev1", 00:17:27.084 "method": "bdev_raid_add_base_bdev", 00:17:27.084 "req_id": 1 00:17:27.084 } 00:17:27.084 Got JSON-RPC error response 00:17:27.084 response: 00:17:27.084 { 00:17:27.084 "code": -22, 00:17:27.084 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:27.084 } 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:17:27.084 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:27.343 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:27.343 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:27.343 15:26:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.282 "name": "raid_bdev1", 00:17:28.282 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:28.282 "strip_size_kb": 0, 00:17:28.282 "state": "online", 00:17:28.282 "raid_level": "raid1", 00:17:28.282 "superblock": true, 00:17:28.282 "num_base_bdevs": 2, 00:17:28.282 "num_base_bdevs_discovered": 1, 00:17:28.282 "num_base_bdevs_operational": 1, 00:17:28.282 "base_bdevs_list": [ 00:17:28.282 { 00:17:28.282 "name": null, 00:17:28.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.282 "is_configured": false, 00:17:28.282 
"data_offset": 0, 00:17:28.282 "data_size": 7936 00:17:28.282 }, 00:17:28.282 { 00:17:28.282 "name": "BaseBdev2", 00:17:28.282 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:28.282 "is_configured": true, 00:17:28.282 "data_offset": 256, 00:17:28.282 "data_size": 7936 00:17:28.282 } 00:17:28.282 ] 00:17:28.282 }' 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.282 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.852 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.852 "name": "raid_bdev1", 00:17:28.852 "uuid": "b824d4b7-b5fb-4a41-a185-40d21e858e77", 00:17:28.852 
"strip_size_kb": 0, 00:17:28.852 "state": "online", 00:17:28.853 "raid_level": "raid1", 00:17:28.853 "superblock": true, 00:17:28.853 "num_base_bdevs": 2, 00:17:28.853 "num_base_bdevs_discovered": 1, 00:17:28.853 "num_base_bdevs_operational": 1, 00:17:28.853 "base_bdevs_list": [ 00:17:28.853 { 00:17:28.853 "name": null, 00:17:28.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.853 "is_configured": false, 00:17:28.853 "data_offset": 0, 00:17:28.853 "data_size": 7936 00:17:28.853 }, 00:17:28.853 { 00:17:28.853 "name": "BaseBdev2", 00:17:28.853 "uuid": "c90f4173-5159-5274-b4c8-fa779be64620", 00:17:28.853 "is_configured": true, 00:17:28.853 "data_offset": 256, 00:17:28.853 "data_size": 7936 00:17:28.853 } 00:17:28.853 ] 00:17:28.853 }' 00:17:28.853 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.853 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.853 15:26:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 99501 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@952 -- # '[' -z 99501 ']' 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # kill -0 99501 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # uname 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 99501 00:17:28.853 15:26:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:28.853 killing process with pid 99501 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@970 -- # echo 'killing process with pid 99501' 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@971 -- # kill 99501 00:17:28.853 Received shutdown signal, test time was about 60.000000 seconds 00:17:28.853 00:17:28.853 Latency(us) 00:17:28.853 [2024-11-10T15:26:35.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.853 [2024-11-10T15:26:35.216Z] =================================================================================================================== 00:17:28.853 [2024-11-10T15:26:35.216Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.853 [2024-11-10 15:26:35.086531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.853 [2024-11-10 15:26:35.086641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.853 [2024-11-10 15:26:35.086691] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.853 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@976 -- # wait 99501 00:17:28.853 [2024-11-10 15:26:35.086702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:28.853 [2024-11-10 15:26:35.147413] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.113 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:29.113 00:17:29.113 real 0m18.567s 00:17:29.113 user 0m24.585s 00:17:29.113 sys 0m2.751s 00:17:29.113 15:26:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:29.113 15:26:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.113 ************************************ 00:17:29.113 END TEST raid_rebuild_test_sb_md_separate 00:17:29.113 ************************************ 00:17:29.373 15:26:35 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:29.373 15:26:35 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:29.373 15:26:35 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:29.373 15:26:35 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:29.373 15:26:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.373 ************************************ 00:17:29.373 START TEST raid_state_function_test_sb_md_interleaved 00:17:29.373 ************************************ 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_state_function_test raid1 2 true 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.373 15:26:35 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=100192 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:29.373 Process raid pid: 100192 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 100192' 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 100192 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 100192 ']' 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:29.373 15:26:35 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:29.373 [2024-11-10 15:26:35.640902] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:17:29.373 [2024-11-10 15:26:35.641063] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.633 [2024-11-10 15:26:35.775653] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:29.633 [2024-11-10 15:26:35.814979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.633 [2024-11-10 15:26:35.855321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.633 [2024-11-10 15:26:35.932264] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.633 [2024-11-10 15:26:35.932303] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.212 [2024-11-10 15:26:36.468905] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.212 [2024-11-10 15:26:36.468959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.212 [2024-11-10 15:26:36.468971] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:17:30.212 [2024-11-10 15:26:36.468978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.212 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.213 "name": "Existed_Raid", 00:17:30.213 "uuid": "225252ed-a8eb-44ca-8176-0cd54f01130f", 00:17:30.213 "strip_size_kb": 0, 00:17:30.213 "state": "configuring", 00:17:30.213 "raid_level": "raid1", 00:17:30.213 "superblock": true, 00:17:30.213 "num_base_bdevs": 2, 00:17:30.213 "num_base_bdevs_discovered": 0, 00:17:30.213 "num_base_bdevs_operational": 2, 00:17:30.213 "base_bdevs_list": [ 00:17:30.213 { 00:17:30.213 "name": "BaseBdev1", 00:17:30.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.213 "is_configured": false, 00:17:30.213 "data_offset": 0, 00:17:30.213 "data_size": 0 00:17:30.213 }, 00:17:30.213 { 00:17:30.213 "name": "BaseBdev2", 00:17:30.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.213 "is_configured": false, 00:17:30.213 "data_offset": 0, 00:17:30.213 "data_size": 0 00:17:30.213 } 00:17:30.213 ] 00:17:30.213 }' 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.213 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 [2024-11-10 15:26:36.944911] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: 
Existed_Raid 00:17:30.782 [2024-11-10 15:26:36.944946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name Existed_Raid, state configuring 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 [2024-11-10 15:26:36.956932] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.782 [2024-11-10 15:26:36.956968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.782 [2024-11-10 15:26:36.956978] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.782 [2024-11-10 15:26:36.956984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 [2024-11-10 15:26:36.984250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.782 BaseBdev1 00:17:30.782 15:26:36 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local bdev_name=BaseBdev1 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:36 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 [ 00:17:30.782 { 00:17:30.782 "name": "BaseBdev1", 00:17:30.782 "aliases": [ 00:17:30.782 "ebc7c283-cec1-4e49-afb1-e9e739fb53ad" 00:17:30.782 ], 00:17:30.782 "product_name": "Malloc 
disk", 00:17:30.782 "block_size": 4128, 00:17:30.782 "num_blocks": 8192, 00:17:30.782 "uuid": "ebc7c283-cec1-4e49-afb1-e9e739fb53ad", 00:17:30.782 "md_size": 32, 00:17:30.782 "md_interleave": true, 00:17:30.782 "dif_type": 0, 00:17:30.782 "assigned_rate_limits": { 00:17:30.782 "rw_ios_per_sec": 0, 00:17:30.782 "rw_mbytes_per_sec": 0, 00:17:30.782 "r_mbytes_per_sec": 0, 00:17:30.782 "w_mbytes_per_sec": 0 00:17:30.782 }, 00:17:30.782 "claimed": true, 00:17:30.782 "claim_type": "exclusive_write", 00:17:30.782 "zoned": false, 00:17:30.782 "supported_io_types": { 00:17:30.782 "read": true, 00:17:30.782 "write": true, 00:17:30.782 "unmap": true, 00:17:30.782 "flush": true, 00:17:30.782 "reset": true, 00:17:30.782 "nvme_admin": false, 00:17:30.782 "nvme_io": false, 00:17:30.782 "nvme_io_md": false, 00:17:30.782 "write_zeroes": true, 00:17:30.782 "zcopy": true, 00:17:30.782 "get_zone_info": false, 00:17:30.782 "zone_management": false, 00:17:30.782 "zone_append": false, 00:17:30.782 "compare": false, 00:17:30.782 "compare_and_write": false, 00:17:30.782 "abort": true, 00:17:30.782 "seek_hole": false, 00:17:30.782 "seek_data": false, 00:17:30.782 "copy": true, 00:17:30.782 "nvme_iov_md": false 00:17:30.782 }, 00:17:30.782 "memory_domains": [ 00:17:30.782 { 00:17:30.782 "dma_device_id": "system", 00:17:30.782 "dma_device_type": 1 00:17:30.782 }, 00:17:30.782 { 00:17:30.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.782 "dma_device_type": 2 00:17:30.782 } 00:17:30.782 ], 00:17:30.782 "driver_specific": {} 00:17:30.782 } 00:17:30.782 ] 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:30.782 15:26:37 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.782 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.783 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.783 "name": "Existed_Raid", 00:17:30.783 "uuid": 
"523bcb36-5abd-4026-b992-53f647dce7db", 00:17:30.783 "strip_size_kb": 0, 00:17:30.783 "state": "configuring", 00:17:30.783 "raid_level": "raid1", 00:17:30.783 "superblock": true, 00:17:30.783 "num_base_bdevs": 2, 00:17:30.783 "num_base_bdevs_discovered": 1, 00:17:30.783 "num_base_bdevs_operational": 2, 00:17:30.783 "base_bdevs_list": [ 00:17:30.783 { 00:17:30.783 "name": "BaseBdev1", 00:17:30.783 "uuid": "ebc7c283-cec1-4e49-afb1-e9e739fb53ad", 00:17:30.783 "is_configured": true, 00:17:30.783 "data_offset": 256, 00:17:30.783 "data_size": 7936 00:17:30.783 }, 00:17:30.783 { 00:17:30.783 "name": "BaseBdev2", 00:17:30.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.783 "is_configured": false, 00:17:30.783 "data_offset": 0, 00:17:30.783 "data_size": 0 00:17:30.783 } 00:17:30.783 ] 00:17:30.783 }' 00:17:30.783 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.783 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.352 [2024-11-10 15:26:37.508426] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.352 [2024-11-10 15:26:37.508470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b 
''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.352 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.352 [2024-11-10 15:26:37.520515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.352 [2024-11-10 15:26:37.522503] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.352 [2024-11-10 15:26:37.522541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.353 "name": "Existed_Raid", 00:17:31.353 "uuid": "116c7a26-b94a-41da-9d3f-3e06ca1cf76d", 00:17:31.353 "strip_size_kb": 0, 00:17:31.353 "state": "configuring", 00:17:31.353 "raid_level": "raid1", 00:17:31.353 "superblock": true, 00:17:31.353 "num_base_bdevs": 2, 00:17:31.353 "num_base_bdevs_discovered": 1, 00:17:31.353 "num_base_bdevs_operational": 2, 00:17:31.353 "base_bdevs_list": [ 00:17:31.353 { 00:17:31.353 "name": "BaseBdev1", 00:17:31.353 "uuid": "ebc7c283-cec1-4e49-afb1-e9e739fb53ad", 00:17:31.353 "is_configured": true, 00:17:31.353 "data_offset": 256, 00:17:31.353 "data_size": 7936 00:17:31.353 }, 00:17:31.353 { 00:17:31.353 "name": "BaseBdev2", 00:17:31.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.353 "is_configured": false, 00:17:31.353 "data_offset": 0, 00:17:31.353 
"data_size": 0 00:17:31.353 } 00:17:31.353 ] 00:17:31.353 }' 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.353 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.613 [2024-11-10 15:26:37.945691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:31.613 [2024-11-10 15:26:37.945871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:31.613 [2024-11-10 15:26:37.945891] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:31.613 [2024-11-10 15:26:37.946003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:31.613 [2024-11-10 15:26:37.946106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:31.613 [2024-11-10 15:26:37.946131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007b00 00:17:31.613 [2024-11-10 15:26:37.946219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.613 BaseBdev2 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- 
# local bdev_name=BaseBdev2 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local i 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_wait_for_examine 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.613 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.613 [ 00:17:31.613 { 00:17:31.613 "name": "BaseBdev2", 00:17:31.613 "aliases": [ 00:17:31.613 "3ca41bea-be9c-480d-9551-0e21cf913308" 00:17:31.613 ], 00:17:31.613 "product_name": "Malloc disk", 00:17:31.613 "block_size": 4128, 00:17:31.613 "num_blocks": 8192, 00:17:31.613 "uuid": "3ca41bea-be9c-480d-9551-0e21cf913308", 00:17:31.613 "md_size": 32, 00:17:31.613 "md_interleave": true, 00:17:31.613 "dif_type": 0, 00:17:31.613 "assigned_rate_limits": { 00:17:31.613 "rw_ios_per_sec": 0, 00:17:31.613 "rw_mbytes_per_sec": 0, 
00:17:31.613 "r_mbytes_per_sec": 0, 00:17:31.613 "w_mbytes_per_sec": 0 00:17:31.613 }, 00:17:31.613 "claimed": true, 00:17:31.873 "claim_type": "exclusive_write", 00:17:31.873 "zoned": false, 00:17:31.873 "supported_io_types": { 00:17:31.873 "read": true, 00:17:31.873 "write": true, 00:17:31.873 "unmap": true, 00:17:31.873 "flush": true, 00:17:31.873 "reset": true, 00:17:31.873 "nvme_admin": false, 00:17:31.873 "nvme_io": false, 00:17:31.873 "nvme_io_md": false, 00:17:31.873 "write_zeroes": true, 00:17:31.873 "zcopy": true, 00:17:31.873 "get_zone_info": false, 00:17:31.873 "zone_management": false, 00:17:31.873 "zone_append": false, 00:17:31.873 "compare": false, 00:17:31.873 "compare_and_write": false, 00:17:31.873 "abort": true, 00:17:31.873 "seek_hole": false, 00:17:31.873 "seek_data": false, 00:17:31.873 "copy": true, 00:17:31.873 "nvme_iov_md": false 00:17:31.873 }, 00:17:31.873 "memory_domains": [ 00:17:31.873 { 00:17:31.873 "dma_device_id": "system", 00:17:31.873 "dma_device_type": 1 00:17:31.873 }, 00:17:31.873 { 00:17:31.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.873 "dma_device_type": 2 00:17:31.873 } 00:17:31.873 ], 00:17:31.873 "driver_specific": {} 00:17:31.873 } 00:17:31.873 ] 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@909 -- # return 0 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.873 15:26:37 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.873 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.873 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.873 "name": "Existed_Raid", 00:17:31.873 "uuid": "116c7a26-b94a-41da-9d3f-3e06ca1cf76d", 00:17:31.873 "strip_size_kb": 0, 00:17:31.873 "state": 
"online", 00:17:31.873 "raid_level": "raid1", 00:17:31.873 "superblock": true, 00:17:31.873 "num_base_bdevs": 2, 00:17:31.873 "num_base_bdevs_discovered": 2, 00:17:31.873 "num_base_bdevs_operational": 2, 00:17:31.873 "base_bdevs_list": [ 00:17:31.873 { 00:17:31.873 "name": "BaseBdev1", 00:17:31.873 "uuid": "ebc7c283-cec1-4e49-afb1-e9e739fb53ad", 00:17:31.873 "is_configured": true, 00:17:31.873 "data_offset": 256, 00:17:31.873 "data_size": 7936 00:17:31.873 }, 00:17:31.873 { 00:17:31.873 "name": "BaseBdev2", 00:17:31.874 "uuid": "3ca41bea-be9c-480d-9551-0e21cf913308", 00:17:31.874 "is_configured": true, 00:17:31.874 "data_offset": 256, 00:17:31.874 "data_size": 7936 00:17:31.874 } 00:17:31.874 ] 00:17:31.874 }' 00:17:31.874 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.874 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.135 [2024-11-10 15:26:38.414116] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.135 "name": "Existed_Raid", 00:17:32.135 "aliases": [ 00:17:32.135 "116c7a26-b94a-41da-9d3f-3e06ca1cf76d" 00:17:32.135 ], 00:17:32.135 "product_name": "Raid Volume", 00:17:32.135 "block_size": 4128, 00:17:32.135 "num_blocks": 7936, 00:17:32.135 "uuid": "116c7a26-b94a-41da-9d3f-3e06ca1cf76d", 00:17:32.135 "md_size": 32, 00:17:32.135 "md_interleave": true, 00:17:32.135 "dif_type": 0, 00:17:32.135 "assigned_rate_limits": { 00:17:32.135 "rw_ios_per_sec": 0, 00:17:32.135 "rw_mbytes_per_sec": 0, 00:17:32.135 "r_mbytes_per_sec": 0, 00:17:32.135 "w_mbytes_per_sec": 0 00:17:32.135 }, 00:17:32.135 "claimed": false, 00:17:32.135 "zoned": false, 00:17:32.135 "supported_io_types": { 00:17:32.135 "read": true, 00:17:32.135 "write": true, 00:17:32.135 "unmap": false, 00:17:32.135 "flush": false, 00:17:32.135 "reset": true, 00:17:32.135 "nvme_admin": false, 00:17:32.135 "nvme_io": false, 00:17:32.135 "nvme_io_md": false, 00:17:32.135 "write_zeroes": true, 00:17:32.135 "zcopy": false, 00:17:32.135 "get_zone_info": false, 00:17:32.135 "zone_management": false, 00:17:32.135 "zone_append": false, 00:17:32.135 "compare": false, 00:17:32.135 "compare_and_write": false, 00:17:32.135 "abort": false, 00:17:32.135 "seek_hole": false, 00:17:32.135 "seek_data": false, 00:17:32.135 "copy": false, 00:17:32.135 "nvme_iov_md": false 00:17:32.135 
}, 00:17:32.135 "memory_domains": [ 00:17:32.135 { 00:17:32.135 "dma_device_id": "system", 00:17:32.135 "dma_device_type": 1 00:17:32.135 }, 00:17:32.135 { 00:17:32.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.135 "dma_device_type": 2 00:17:32.135 }, 00:17:32.135 { 00:17:32.135 "dma_device_id": "system", 00:17:32.135 "dma_device_type": 1 00:17:32.135 }, 00:17:32.135 { 00:17:32.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.135 "dma_device_type": 2 00:17:32.135 } 00:17:32.135 ], 00:17:32.135 "driver_specific": { 00:17:32.135 "raid": { 00:17:32.135 "uuid": "116c7a26-b94a-41da-9d3f-3e06ca1cf76d", 00:17:32.135 "strip_size_kb": 0, 00:17:32.135 "state": "online", 00:17:32.135 "raid_level": "raid1", 00:17:32.135 "superblock": true, 00:17:32.135 "num_base_bdevs": 2, 00:17:32.135 "num_base_bdevs_discovered": 2, 00:17:32.135 "num_base_bdevs_operational": 2, 00:17:32.135 "base_bdevs_list": [ 00:17:32.135 { 00:17:32.135 "name": "BaseBdev1", 00:17:32.135 "uuid": "ebc7c283-cec1-4e49-afb1-e9e739fb53ad", 00:17:32.135 "is_configured": true, 00:17:32.135 "data_offset": 256, 00:17:32.135 "data_size": 7936 00:17:32.135 }, 00:17:32.135 { 00:17:32.135 "name": "BaseBdev2", 00:17:32.135 "uuid": "3ca41bea-be9c-480d-9551-0e21cf913308", 00:17:32.135 "is_configured": true, 00:17:32.135 "data_offset": 256, 00:17:32.135 "data_size": 7936 00:17:32.135 } 00:17:32.135 ] 00:17:32.135 } 00:17:32.135 } 00:17:32.135 }' 00:17:32.135 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:32.403 BaseBdev2' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.403 15:26:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.403 15:26:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.403 [2024-11-10 15:26:38.637981] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.403 15:26:38 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.403 "name": "Existed_Raid", 00:17:32.403 "uuid": "116c7a26-b94a-41da-9d3f-3e06ca1cf76d", 00:17:32.403 "strip_size_kb": 0, 00:17:32.403 "state": "online", 00:17:32.403 "raid_level": "raid1", 
00:17:32.403 "superblock": true, 00:17:32.403 "num_base_bdevs": 2, 00:17:32.403 "num_base_bdevs_discovered": 1, 00:17:32.403 "num_base_bdevs_operational": 1, 00:17:32.403 "base_bdevs_list": [ 00:17:32.403 { 00:17:32.403 "name": null, 00:17:32.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.403 "is_configured": false, 00:17:32.403 "data_offset": 0, 00:17:32.403 "data_size": 7936 00:17:32.403 }, 00:17:32.403 { 00:17:32.403 "name": "BaseBdev2", 00:17:32.403 "uuid": "3ca41bea-be9c-480d-9551-0e21cf913308", 00:17:32.403 "is_configured": true, 00:17:32.403 "data_offset": 256, 00:17:32.403 "data_size": 7936 00:17:32.403 } 00:17:32.403 ] 00:17:32.403 }' 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.403 15:26:38 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.999 [2024-11-10 15:26:39.115313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:32.999 [2024-11-10 15:26:39.115423] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:32.999 [2024-11-10 15:26:39.137105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:32.999 [2024-11-10 15:26:39.137159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:32.999 [2024-11-10 15:26:39.137169] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state offline 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.999 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 100192 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 100192 ']' 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 100192 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100192 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.000 killing process with pid 100192 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100192' 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 100192 00:17:33.000 [2024-11-10 15:26:39.234794] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:17:33.000 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@976 -- # wait 100192 00:17:33.000 [2024-11-10 15:26:39.236399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:33.260 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:33.260 00:17:33.260 real 0m4.024s 00:17:33.260 user 0m6.152s 00:17:33.260 sys 0m0.948s 00:17:33.260 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:33.260 15:26:39 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.260 ************************************ 00:17:33.260 END TEST raid_state_function_test_sb_md_interleaved 00:17:33.260 ************************************ 00:17:33.520 15:26:39 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:33.520 15:26:39 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:33.520 15:26:39 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:33.520 15:26:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:33.520 ************************************ 00:17:33.520 START TEST raid_superblock_test_md_interleaved 00:17:33.520 ************************************ 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1127 -- # raid_superblock_test raid1 2 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=100428 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 100428 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 100428 ']' 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@837 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:33.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:33.520 15:26:39 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.520 [2024-11-10 15:26:39.753220] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:17:33.520 [2024-11-10 15:26:39.753392] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100428 ] 00:17:33.780 [2024-11-10 15:26:39.892848] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:33.780 [2024-11-10 15:26:39.930219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.780 [2024-11-10 15:26:39.971569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:33.780 [2024-11-10 15:26:40.048669] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:33.780 [2024-11-10 15:26:40.048708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.349 15:26:40 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.349 malloc1 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.349 [2024-11-10 15:26:40.596627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:34.349 [2024-11-10 15:26:40.596687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.349 [2024-11-10 15:26:40.596713] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:34.349 [2024-11-10 15:26:40.596722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.349 [2024-11-10 15:26:40.598892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.349 [2024-11-10 15:26:40.598932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:34.349 pt1 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:34.349 
15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.349 malloc2 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.349 [2024-11-10 15:26:40.631715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:34.349 [2024-11-10 15:26:40.631763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.349 [2024-11-10 15:26:40.631783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:34.349 [2024-11-10 15:26:40.631791] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:34.349 [2024-11-10 15:26:40.633924] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.349 [2024-11-10 15:26:40.633956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:34.349 pt2 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.349 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.350 [2024-11-10 15:26:40.643732] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:34.350 [2024-11-10 15:26:40.645824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:34.350 [2024-11-10 15:26:40.645979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:34.350 [2024-11-10 15:26:40.645992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:34.350 [2024-11-10 15:26:40.646082] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:34.350 [2024-11-10 15:26:40.646166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:34.350 [2024-11-10 15:26:40.646187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:34.350 [2024-11-10 15:26:40.646286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.350 
15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.350 
15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.350 "name": "raid_bdev1", 00:17:34.350 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:34.350 "strip_size_kb": 0, 00:17:34.350 "state": "online", 00:17:34.350 "raid_level": "raid1", 00:17:34.350 "superblock": true, 00:17:34.350 "num_base_bdevs": 2, 00:17:34.350 "num_base_bdevs_discovered": 2, 00:17:34.350 "num_base_bdevs_operational": 2, 00:17:34.350 "base_bdevs_list": [ 00:17:34.350 { 00:17:34.350 "name": "pt1", 00:17:34.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.350 "is_configured": true, 00:17:34.350 "data_offset": 256, 00:17:34.350 "data_size": 7936 00:17:34.350 }, 00:17:34.350 { 00:17:34.350 "name": "pt2", 00:17:34.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.350 "is_configured": true, 00:17:34.350 "data_offset": 256, 00:17:34.350 "data_size": 7936 00:17:34.350 } 00:17:34.350 ] 00:17:34.350 }' 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.350 15:26:40 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:34.920 15:26:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:34.920 [2024-11-10 15:26:41.116157] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:34.920 "name": "raid_bdev1", 00:17:34.920 "aliases": [ 00:17:34.920 "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3" 00:17:34.920 ], 00:17:34.920 "product_name": "Raid Volume", 00:17:34.920 "block_size": 4128, 00:17:34.920 "num_blocks": 7936, 00:17:34.920 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:34.920 "md_size": 32, 00:17:34.920 "md_interleave": true, 00:17:34.920 "dif_type": 0, 00:17:34.920 "assigned_rate_limits": { 00:17:34.920 "rw_ios_per_sec": 0, 00:17:34.920 "rw_mbytes_per_sec": 0, 00:17:34.920 "r_mbytes_per_sec": 0, 00:17:34.920 "w_mbytes_per_sec": 0 00:17:34.920 }, 00:17:34.920 "claimed": false, 00:17:34.920 "zoned": false, 00:17:34.920 "supported_io_types": { 00:17:34.920 "read": true, 00:17:34.920 "write": true, 00:17:34.920 "unmap": false, 00:17:34.920 "flush": false, 00:17:34.920 "reset": true, 00:17:34.920 "nvme_admin": false, 00:17:34.920 "nvme_io": false, 00:17:34.920 "nvme_io_md": false, 00:17:34.920 "write_zeroes": true, 00:17:34.920 "zcopy": false, 00:17:34.920 "get_zone_info": false, 00:17:34.920 "zone_management": false, 00:17:34.920 "zone_append": false, 00:17:34.920 "compare": false, 00:17:34.920 "compare_and_write": false, 00:17:34.920 
"abort": false, 00:17:34.920 "seek_hole": false, 00:17:34.920 "seek_data": false, 00:17:34.920 "copy": false, 00:17:34.920 "nvme_iov_md": false 00:17:34.920 }, 00:17:34.920 "memory_domains": [ 00:17:34.920 { 00:17:34.920 "dma_device_id": "system", 00:17:34.920 "dma_device_type": 1 00:17:34.920 }, 00:17:34.920 { 00:17:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.920 "dma_device_type": 2 00:17:34.920 }, 00:17:34.920 { 00:17:34.920 "dma_device_id": "system", 00:17:34.920 "dma_device_type": 1 00:17:34.920 }, 00:17:34.920 { 00:17:34.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.920 "dma_device_type": 2 00:17:34.920 } 00:17:34.920 ], 00:17:34.920 "driver_specific": { 00:17:34.920 "raid": { 00:17:34.920 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:34.920 "strip_size_kb": 0, 00:17:34.920 "state": "online", 00:17:34.920 "raid_level": "raid1", 00:17:34.920 "superblock": true, 00:17:34.920 "num_base_bdevs": 2, 00:17:34.920 "num_base_bdevs_discovered": 2, 00:17:34.920 "num_base_bdevs_operational": 2, 00:17:34.920 "base_bdevs_list": [ 00:17:34.920 { 00:17:34.920 "name": "pt1", 00:17:34.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:34.920 "is_configured": true, 00:17:34.920 "data_offset": 256, 00:17:34.920 "data_size": 7936 00:17:34.920 }, 00:17:34.920 { 00:17:34.920 "name": "pt2", 00:17:34.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:34.920 "is_configured": true, 00:17:34.920 "data_offset": 256, 00:17:34.920 "data_size": 7936 00:17:34.920 } 00:17:34.920 ] 00:17:34.920 } 00:17:34.920 } 00:17:34.920 }' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:34.920 pt2' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r 
'[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.920 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:35.181 [2024-11-10 15:26:41.344115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 ']' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 [2024-11-10 15:26:41.391893] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.181 [2024-11-10 15:26:41.391914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:35.181 [2024-11-10 
15:26:41.391991] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:35.181 [2024-11-10 15:26:41.392060] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:35.181 [2024-11-10 15:26:41.392080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:35.181 15:26:41 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.181 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.181 [2024-11-10 15:26:41.531931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:35.181 [2024-11-10 15:26:41.534043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:35.181 [2024-11-10 15:26:41.534101] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:35.181 [2024-11-10 15:26:41.534141] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:35.181 [2024-11-10 15:26:41.534154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:35.181 [2024-11-10 15:26:41.534163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state configuring 00:17:35.181 request: 00:17:35.181 { 00:17:35.181 "name": "raid_bdev1", 00:17:35.181 "raid_level": "raid1", 00:17:35.181 "base_bdevs": [ 00:17:35.181 "malloc1", 00:17:35.181 "malloc2" 00:17:35.181 ], 00:17:35.181 "superblock": false, 00:17:35.181 "method": "bdev_raid_create", 00:17:35.181 "req_id": 1 00:17:35.182 } 00:17:35.182 Got JSON-RPC error response 
00:17:35.182 response: 00:17:35.182 { 00:17:35.182 "code": -17, 00:17:35.182 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:35.182 } 00:17:35.182 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:35.182 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:35.182 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:35.182 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:35.182 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.442 
[2024-11-10 15:26:41.599921] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.442 [2024-11-10 15:26:41.599969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.442 [2024-11-10 15:26:41.599987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:35.442 [2024-11-10 15:26:41.600001] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.442 [2024-11-10 15:26:41.602112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.442 [2024-11-10 15:26:41.602145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.442 [2024-11-10 15:26:41.602183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:35.442 [2024-11-10 15:26:41.602221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:35.442 pt1 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.442 "name": "raid_bdev1", 00:17:35.442 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:35.442 "strip_size_kb": 0, 00:17:35.442 "state": "configuring", 00:17:35.442 "raid_level": "raid1", 00:17:35.442 "superblock": true, 00:17:35.442 "num_base_bdevs": 2, 00:17:35.442 "num_base_bdevs_discovered": 1, 00:17:35.442 "num_base_bdevs_operational": 2, 00:17:35.442 "base_bdevs_list": [ 00:17:35.442 { 00:17:35.442 "name": "pt1", 00:17:35.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.442 "is_configured": true, 00:17:35.442 "data_offset": 256, 00:17:35.442 "data_size": 7936 00:17:35.442 }, 00:17:35.442 { 00:17:35.442 "name": null, 00:17:35.442 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.442 "is_configured": false, 00:17:35.442 "data_offset": 256, 00:17:35.442 "data_size": 7936 00:17:35.442 } 00:17:35.442 ] 00:17:35.442 }' 00:17:35.442 15:26:41 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.442 15:26:41 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.702 [2024-11-10 15:26:42.044037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:35.702 [2024-11-10 15:26:42.044088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.702 [2024-11-10 15:26:42.044105] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:35.702 [2024-11-10 15:26:42.044115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.702 [2024-11-10 15:26:42.044214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.702 [2024-11-10 15:26:42.044227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:35.702 [2024-11-10 15:26:42.044261] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:35.702 [2024-11-10 15:26:42.044278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:35.702 [2024-11-10 15:26:42.044342] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:35.702 [2024-11-10 15:26:42.044352] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.702 [2024-11-10 15:26:42.044415] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:35.702 [2024-11-10 15:26:42.044514] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:35.702 [2024-11-10 15:26:42.044522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:35.702 [2024-11-10 15:26:42.044574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.702 pt2 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.702 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.703 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.963 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.963 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.963 "name": "raid_bdev1", 00:17:35.963 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:35.963 "strip_size_kb": 0, 00:17:35.963 "state": "online", 00:17:35.963 "raid_level": "raid1", 00:17:35.963 "superblock": true, 00:17:35.963 "num_base_bdevs": 2, 00:17:35.963 "num_base_bdevs_discovered": 2, 00:17:35.963 "num_base_bdevs_operational": 2, 00:17:35.963 "base_bdevs_list": [ 00:17:35.963 { 00:17:35.963 "name": "pt1", 00:17:35.963 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:35.963 "is_configured": true, 00:17:35.963 "data_offset": 256, 00:17:35.963 "data_size": 7936 00:17:35.963 }, 00:17:35.963 { 00:17:35.963 "name": "pt2", 00:17:35.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:35.963 "is_configured": true, 00:17:35.963 "data_offset": 256, 00:17:35.963 "data_size": 7936 00:17:35.963 } 00:17:35.963 ] 00:17:35.963 }' 00:17:35.963 15:26:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.963 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.223 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:36.223 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.224 [2024-11-10 15:26:42.484395] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:36.224 "name": "raid_bdev1", 00:17:36.224 "aliases": [ 00:17:36.224 "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3" 00:17:36.224 ], 00:17:36.224 "product_name": "Raid Volume", 00:17:36.224 "block_size": 4128, 00:17:36.224 
"num_blocks": 7936, 00:17:36.224 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:36.224 "md_size": 32, 00:17:36.224 "md_interleave": true, 00:17:36.224 "dif_type": 0, 00:17:36.224 "assigned_rate_limits": { 00:17:36.224 "rw_ios_per_sec": 0, 00:17:36.224 "rw_mbytes_per_sec": 0, 00:17:36.224 "r_mbytes_per_sec": 0, 00:17:36.224 "w_mbytes_per_sec": 0 00:17:36.224 }, 00:17:36.224 "claimed": false, 00:17:36.224 "zoned": false, 00:17:36.224 "supported_io_types": { 00:17:36.224 "read": true, 00:17:36.224 "write": true, 00:17:36.224 "unmap": false, 00:17:36.224 "flush": false, 00:17:36.224 "reset": true, 00:17:36.224 "nvme_admin": false, 00:17:36.224 "nvme_io": false, 00:17:36.224 "nvme_io_md": false, 00:17:36.224 "write_zeroes": true, 00:17:36.224 "zcopy": false, 00:17:36.224 "get_zone_info": false, 00:17:36.224 "zone_management": false, 00:17:36.224 "zone_append": false, 00:17:36.224 "compare": false, 00:17:36.224 "compare_and_write": false, 00:17:36.224 "abort": false, 00:17:36.224 "seek_hole": false, 00:17:36.224 "seek_data": false, 00:17:36.224 "copy": false, 00:17:36.224 "nvme_iov_md": false 00:17:36.224 }, 00:17:36.224 "memory_domains": [ 00:17:36.224 { 00:17:36.224 "dma_device_id": "system", 00:17:36.224 "dma_device_type": 1 00:17:36.224 }, 00:17:36.224 { 00:17:36.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.224 "dma_device_type": 2 00:17:36.224 }, 00:17:36.224 { 00:17:36.224 "dma_device_id": "system", 00:17:36.224 "dma_device_type": 1 00:17:36.224 }, 00:17:36.224 { 00:17:36.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.224 "dma_device_type": 2 00:17:36.224 } 00:17:36.224 ], 00:17:36.224 "driver_specific": { 00:17:36.224 "raid": { 00:17:36.224 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:36.224 "strip_size_kb": 0, 00:17:36.224 "state": "online", 00:17:36.224 "raid_level": "raid1", 00:17:36.224 "superblock": true, 00:17:36.224 "num_base_bdevs": 2, 00:17:36.224 "num_base_bdevs_discovered": 2, 00:17:36.224 "num_base_bdevs_operational": 
2, 00:17:36.224 "base_bdevs_list": [ 00:17:36.224 { 00:17:36.224 "name": "pt1", 00:17:36.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.224 "is_configured": true, 00:17:36.224 "data_offset": 256, 00:17:36.224 "data_size": 7936 00:17:36.224 }, 00:17:36.224 { 00:17:36.224 "name": "pt2", 00:17:36.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.224 "is_configured": true, 00:17:36.224 "data_offset": 256, 00:17:36.224 "data_size": 7936 00:17:36.224 } 00:17:36.224 ] 00:17:36.224 } 00:17:36.224 } 00:17:36.224 }' 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:36.224 pt2' 00:17:36.224 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.484 15:26:42 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.484 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 [2024-11-10 15:26:42.700438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 '!=' 4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 ']' 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 [2024-11-10 15:26:42.748229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:36.485 "name": "raid_bdev1", 00:17:36.485 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:36.485 "strip_size_kb": 0, 00:17:36.485 "state": "online", 00:17:36.485 "raid_level": "raid1", 00:17:36.485 "superblock": true, 00:17:36.485 "num_base_bdevs": 2, 00:17:36.485 "num_base_bdevs_discovered": 1, 00:17:36.485 "num_base_bdevs_operational": 1, 00:17:36.485 "base_bdevs_list": [ 00:17:36.485 { 00:17:36.485 "name": null, 00:17:36.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:36.485 "is_configured": false, 00:17:36.485 "data_offset": 0, 00:17:36.485 "data_size": 7936 00:17:36.485 }, 00:17:36.485 { 00:17:36.485 "name": "pt2", 00:17:36.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.485 "is_configured": true, 00:17:36.485 "data_offset": 256, 00:17:36.485 "data_size": 7936 00:17:36.485 } 00:17:36.485 ] 00:17:36.485 
}' 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:36.485 15:26:42 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 [2024-11-10 15:26:43.252339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.055 [2024-11-10 15:26:43.252412] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.055 [2024-11-10 15:26:43.252497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.055 [2024-11-10 15:26:43.252549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.055 [2024-11-10 15:26:43.252599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.055 
15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 [2024-11-10 15:26:43.328365] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:37.055 [2024-11-10 15:26:43.328413] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.055 [2024-11-10 15:26:43.328426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:37.055 [2024-11-10 15:26:43.328436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.055 [2024-11-10 15:26:43.330619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.055 [2024-11-10 15:26:43.330659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:37.055 [2024-11-10 15:26:43.330699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:37.055 [2024-11-10 15:26:43.330733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.055 [2024-11-10 15:26:43.330791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:37.055 [2024-11-10 15:26:43.330800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:37.055 [2024-11-10 15:26:43.330889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:37.055 [2024-11-10 15:26:43.330948] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:37.055 [2024-11-10 15:26:43.330955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:37.055 [2024-11-10 15:26:43.331002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.055 pt2 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.055 15:26:43 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.055 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.055 "name": "raid_bdev1", 00:17:37.055 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:37.055 "strip_size_kb": 0, 00:17:37.055 "state": "online", 00:17:37.055 
"raid_level": "raid1", 00:17:37.055 "superblock": true, 00:17:37.055 "num_base_bdevs": 2, 00:17:37.055 "num_base_bdevs_discovered": 1, 00:17:37.055 "num_base_bdevs_operational": 1, 00:17:37.055 "base_bdevs_list": [ 00:17:37.055 { 00:17:37.055 "name": null, 00:17:37.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.055 "is_configured": false, 00:17:37.055 "data_offset": 256, 00:17:37.055 "data_size": 7936 00:17:37.055 }, 00:17:37.055 { 00:17:37.055 "name": "pt2", 00:17:37.055 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.055 "is_configured": true, 00:17:37.055 "data_offset": 256, 00:17:37.055 "data_size": 7936 00:17:37.055 } 00:17:37.055 ] 00:17:37.056 }' 00:17:37.056 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.056 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.625 [2024-11-10 15:26:43.796486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.625 [2024-11-10 15:26:43.796552] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.625 [2024-11-10 15:26:43.796634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.625 [2024-11-10 15:26:43.796687] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.625 [2024-11-10 15:26:43.796735] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:37.625 15:26:43 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:37.625 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.626 [2024-11-10 15:26:43.860512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:37.626 [2024-11-10 15:26:43.860596] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.626 [2024-11-10 15:26:43.860631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:37.626 [2024-11-10 15:26:43.860657] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.626 [2024-11-10 15:26:43.862797] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.626 [2024-11-10 15:26:43.862878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:37.626 [2024-11-10 15:26:43.862937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:37.626 [2024-11-10 15:26:43.862979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:37.626 [2024-11-10 15:26:43.863103] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:37.626 [2024-11-10 15:26:43.863159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.626 [2024-11-10 15:26:43.863197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state configuring 00:17:37.626 [2024-11-10 15:26:43.863286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:37.626 [2024-11-10 15:26:43.863373] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:17:37.626 [2024-11-10 15:26:43.863409] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:37.626 [2024-11-10 15:26:43.863491] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:37.626 [2024-11-10 15:26:43.863574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:17:37.626 [2024-11-10 15:26:43.863615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:17:37.626 [2024-11-10 15:26:43.863715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.626 pt1 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.626 "name": "raid_bdev1", 00:17:37.626 "uuid": "4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3", 00:17:37.626 "strip_size_kb": 0, 00:17:37.626 "state": "online", 00:17:37.626 "raid_level": "raid1", 00:17:37.626 "superblock": true, 00:17:37.626 "num_base_bdevs": 2, 00:17:37.626 "num_base_bdevs_discovered": 1, 00:17:37.626 "num_base_bdevs_operational": 1, 00:17:37.626 "base_bdevs_list": [ 00:17:37.626 { 00:17:37.626 "name": null, 00:17:37.626 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.626 "is_configured": false, 00:17:37.626 "data_offset": 256, 00:17:37.626 "data_size": 7936 00:17:37.626 }, 00:17:37.626 { 00:17:37.626 "name": "pt2", 00:17:37.626 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.626 "is_configured": true, 00:17:37.626 "data_offset": 256, 00:17:37.626 "data_size": 7936 00:17:37.626 } 00:17:37.626 ] 00:17:37.626 }' 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.626 15:26:43 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:38.195 [2024-11-10 15:26:44.388859] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 '!=' 4e226b30-bc53-4cb1-8f4a-aeb5b0981fa3 ']' 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 100428 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 100428 ']' 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 100428 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100428 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100428' 00:17:38.195 killing process with pid 100428 
00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@971 -- # kill 100428 00:17:38.195 [2024-11-10 15:26:44.475394] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.195 [2024-11-10 15:26:44.475476] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.195 [2024-11-10 15:26:44.475512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.195 [2024-11-10 15:26:44.475523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:17:38.195 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@976 -- # wait 100428 00:17:38.195 [2024-11-10 15:26:44.518292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:38.765 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:38.765 00:17:38.765 real 0m5.196s 00:17:38.765 user 0m8.295s 00:17:38.765 sys 0m1.220s 00:17:38.765 ************************************ 00:17:38.765 END TEST raid_superblock_test_md_interleaved 00:17:38.765 ************************************ 00:17:38.765 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:38.765 15:26:44 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.765 15:26:44 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:38.765 15:26:44 bdev_raid -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:17:38.765 15:26:44 bdev_raid -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:38.765 15:26:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:38.765 ************************************ 00:17:38.765 START TEST raid_rebuild_test_sb_md_interleaved 00:17:38.765 
************************************ 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1127 -- # raid_rebuild_test raid1 2 true false false 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:38.765 15:26:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:38.765 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=100747 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 100747 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # '[' -z 100747 ']' 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:38.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:38.766 15:26:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.766 [2024-11-10 15:26:45.037181] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:17:38.766 [2024-11-10 15:26:45.037372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:38.766 Zero copy mechanism will not be used. 00:17:38.766 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100747 ] 00:17:39.026 [2024-11-10 15:26:45.171880] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:39.026 [2024-11-10 15:26:45.213255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.026 [2024-11-10 15:26:45.254157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.026 [2024-11-10 15:26:45.331214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.026 [2024-11-10 15:26:45.331353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@866 -- # return 0 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.596 BaseBdev1_malloc 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.596 [2024-11-10 15:26:45.898536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.596 [2024-11-10 15:26:45.898601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.596 
[2024-11-10 15:26:45.898627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:39.596 [2024-11-10 15:26:45.898644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.596 [2024-11-10 15:26:45.900857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.596 [2024-11-10 15:26:45.900894] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.596 BaseBdev1 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.596 BaseBdev2_malloc 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.596 [2024-11-10 15:26:45.933436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:39.596 [2024-11-10 15:26:45.933492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.596 [2024-11-10 15:26:45.933513] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:39.596 [2024-11-10 15:26:45.933525] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.596 [2024-11-10 15:26:45.935674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.596 [2024-11-10 15:26:45.935711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:39.596 BaseBdev2 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.596 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 spare_malloc 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 spare_delay 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.856 15:26:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 [2024-11-10 15:26:45.980504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.856 [2024-11-10 15:26:45.980580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.856 [2024-11-10 15:26:45.980604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:39.856 [2024-11-10 15:26:45.980615] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.856 [2024-11-10 15:26:45.982788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.856 [2024-11-10 15:26:45.982825] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.856 spare 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.856 [2024-11-10 15:26:45.992569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:39.856 [2024-11-10 15:26:45.994696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.856 [2024-11-10 15:26:45.994855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007400 00:17:39.856 [2024-11-10 15:26:45.994870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:39.856 [2024-11-10 15:26:45.994945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:17:39.856 [2024-11-10 15:26:45.995038] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007400 00:17:39.856 [2024-11-10 15:26:45.995047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007400 00:17:39.856 [2024-11-10 15:26:45.995119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.856 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.857 15:26:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.857 "name": "raid_bdev1", 00:17:39.857 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:39.857 "strip_size_kb": 0, 00:17:39.857 "state": "online", 00:17:39.857 "raid_level": "raid1", 00:17:39.857 "superblock": true, 00:17:39.857 "num_base_bdevs": 2, 00:17:39.857 "num_base_bdevs_discovered": 2, 00:17:39.857 "num_base_bdevs_operational": 2, 00:17:39.857 "base_bdevs_list": [ 00:17:39.857 { 00:17:39.857 "name": "BaseBdev1", 00:17:39.857 "uuid": "95270727-5eb4-5eb1-995d-39d27c3a5252", 00:17:39.857 "is_configured": true, 00:17:39.857 "data_offset": 256, 00:17:39.857 "data_size": 7936 00:17:39.857 }, 00:17:39.857 { 00:17:39.857 "name": "BaseBdev2", 00:17:39.857 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:39.857 "is_configured": true, 00:17:39.857 "data_offset": 256, 00:17:39.857 "data_size": 7936 00:17:39.857 } 00:17:39.857 ] 00:17:39.857 }' 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.857 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.116 
15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.116 [2024-11-10 15:26:46.432936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:40.116 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.376 [2024-11-10 15:26:46.524655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.376 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.377 "name": "raid_bdev1", 00:17:40.377 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:40.377 "strip_size_kb": 0, 00:17:40.377 "state": "online", 00:17:40.377 "raid_level": "raid1", 00:17:40.377 "superblock": true, 00:17:40.377 "num_base_bdevs": 2, 00:17:40.377 "num_base_bdevs_discovered": 1, 00:17:40.377 "num_base_bdevs_operational": 1, 00:17:40.377 "base_bdevs_list": [ 00:17:40.377 { 00:17:40.377 "name": null, 00:17:40.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.377 "is_configured": false, 00:17:40.377 "data_offset": 0, 00:17:40.377 "data_size": 7936 00:17:40.377 }, 00:17:40.377 { 00:17:40.377 "name": "BaseBdev2", 00:17:40.377 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:40.377 "is_configured": true, 00:17:40.377 "data_offset": 256, 00:17:40.377 "data_size": 7936 00:17:40.377 } 00:17:40.377 ] 00:17:40.377 }' 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.377 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.637 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.637 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.637 15:26:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.637 [2024-11-10 15:26:46.988806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.897 [2024-11-10 15:26:47.005688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:17:40.897 15:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.897 15:26:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:40.897 
[2024-11-10 15:26:47.012299] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.836 "name": "raid_bdev1", 00:17:41.836 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:41.836 "strip_size_kb": 0, 00:17:41.836 "state": "online", 00:17:41.836 "raid_level": "raid1", 00:17:41.836 "superblock": true, 00:17:41.836 "num_base_bdevs": 2, 00:17:41.836 "num_base_bdevs_discovered": 2, 00:17:41.836 "num_base_bdevs_operational": 2, 00:17:41.836 "process": { 00:17:41.836 "type": "rebuild", 00:17:41.836 "target": "spare", 00:17:41.836 "progress": { 00:17:41.836 
"blocks": 2560, 00:17:41.836 "percent": 32 00:17:41.836 } 00:17:41.836 }, 00:17:41.836 "base_bdevs_list": [ 00:17:41.836 { 00:17:41.836 "name": "spare", 00:17:41.836 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:41.836 "is_configured": true, 00:17:41.836 "data_offset": 256, 00:17:41.836 "data_size": 7936 00:17:41.836 }, 00:17:41.836 { 00:17:41.836 "name": "BaseBdev2", 00:17:41.836 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:41.836 "is_configured": true, 00:17:41.836 "data_offset": 256, 00:17:41.836 "data_size": 7936 00:17:41.836 } 00:17:41.836 ] 00:17:41.836 }' 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.836 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.836 [2024-11-10 15:26:48.174230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.096 [2024-11-10 15:26:48.223856] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.096 [2024-11-10 15:26:48.223927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.096 [2024-11-10 15:26:48.223942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.096 [2024-11-10 15:26:48.223956] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.096 "name": "raid_bdev1", 00:17:42.096 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:42.096 "strip_size_kb": 0, 00:17:42.096 "state": "online", 00:17:42.096 "raid_level": "raid1", 00:17:42.096 "superblock": true, 00:17:42.096 "num_base_bdevs": 2, 00:17:42.096 "num_base_bdevs_discovered": 1, 00:17:42.096 "num_base_bdevs_operational": 1, 00:17:42.096 "base_bdevs_list": [ 00:17:42.096 { 00:17:42.096 "name": null, 00:17:42.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.096 "is_configured": false, 00:17:42.096 "data_offset": 0, 00:17:42.096 "data_size": 7936 00:17:42.096 }, 00:17:42.096 { 00:17:42.096 "name": "BaseBdev2", 00:17:42.096 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:42.096 "is_configured": true, 00:17:42.096 "data_offset": 256, 00:17:42.096 "data_size": 7936 00:17:42.096 } 00:17:42.096 ] 00:17:42.096 }' 00:17:42.096 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.097 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.356 15:26:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.356 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.616 "name": "raid_bdev1", 00:17:42.616 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:42.616 "strip_size_kb": 0, 00:17:42.616 "state": "online", 00:17:42.616 "raid_level": "raid1", 00:17:42.616 "superblock": true, 00:17:42.616 "num_base_bdevs": 2, 00:17:42.616 "num_base_bdevs_discovered": 1, 00:17:42.616 "num_base_bdevs_operational": 1, 00:17:42.616 "base_bdevs_list": [ 00:17:42.616 { 00:17:42.616 "name": null, 00:17:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.616 "is_configured": false, 00:17:42.616 "data_offset": 0, 00:17:42.616 "data_size": 7936 00:17:42.616 }, 00:17:42.616 { 00:17:42.616 "name": "BaseBdev2", 00:17:42.616 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:42.616 "is_configured": true, 00:17:42.616 "data_offset": 256, 00:17:42.616 "data_size": 7936 00:17:42.616 } 00:17:42.616 ] 00:17:42.616 }' 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.616 15:26:48 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.616 [2024-11-10 15:26:48.802465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.616 [2024-11-10 15:26:48.807732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.616 15:26:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:42.616 [2024-11-10 15:26:48.809953] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.555 
15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.555 "name": "raid_bdev1", 00:17:43.555 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:43.555 "strip_size_kb": 0, 00:17:43.555 "state": "online", 00:17:43.555 "raid_level": "raid1", 00:17:43.555 "superblock": true, 00:17:43.555 "num_base_bdevs": 2, 00:17:43.555 "num_base_bdevs_discovered": 2, 00:17:43.555 "num_base_bdevs_operational": 2, 00:17:43.555 "process": { 00:17:43.555 "type": "rebuild", 00:17:43.555 "target": "spare", 00:17:43.555 "progress": { 00:17:43.555 "blocks": 2560, 00:17:43.555 "percent": 32 00:17:43.555 } 00:17:43.555 }, 00:17:43.555 "base_bdevs_list": [ 00:17:43.555 { 00:17:43.555 "name": "spare", 00:17:43.555 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:43.555 "is_configured": true, 00:17:43.555 "data_offset": 256, 00:17:43.555 "data_size": 7936 00:17:43.555 }, 00:17:43.555 { 00:17:43.555 "name": "BaseBdev2", 00:17:43.555 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:43.555 "is_configured": true, 00:17:43.555 "data_offset": 256, 00:17:43.555 "data_size": 7936 00:17:43.555 } 00:17:43.555 ] 00:17:43.555 }' 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.555 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.815 15:26:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:43.815 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=623 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.815 15:26:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.815 "name": "raid_bdev1", 00:17:43.815 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:43.815 "strip_size_kb": 0, 00:17:43.815 "state": "online", 00:17:43.815 "raid_level": "raid1", 00:17:43.815 "superblock": true, 00:17:43.815 "num_base_bdevs": 2, 00:17:43.815 "num_base_bdevs_discovered": 2, 00:17:43.815 "num_base_bdevs_operational": 2, 00:17:43.815 "process": { 00:17:43.815 "type": "rebuild", 00:17:43.815 "target": "spare", 00:17:43.815 "progress": { 00:17:43.815 "blocks": 2816, 00:17:43.815 "percent": 35 00:17:43.815 } 00:17:43.815 }, 00:17:43.815 "base_bdevs_list": [ 00:17:43.815 { 00:17:43.815 "name": "spare", 00:17:43.815 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:43.815 "is_configured": true, 00:17:43.815 "data_offset": 256, 00:17:43.815 "data_size": 7936 00:17:43.815 }, 00:17:43.815 { 00:17:43.815 "name": "BaseBdev2", 00:17:43.815 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:43.815 "is_configured": true, 00:17:43.815 "data_offset": 256, 00:17:43.815 "data_size": 7936 00:17:43.815 } 00:17:43.815 ] 00:17:43.815 }' 00:17:43.815 15:26:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.815 15:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.815 15:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.815 15:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.815 15:26:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.754 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.014 "name": "raid_bdev1", 00:17:45.014 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:45.014 "strip_size_kb": 0, 00:17:45.014 "state": "online", 00:17:45.014 "raid_level": "raid1", 00:17:45.014 "superblock": true, 00:17:45.014 "num_base_bdevs": 2, 00:17:45.014 "num_base_bdevs_discovered": 2, 00:17:45.014 
"num_base_bdevs_operational": 2, 00:17:45.014 "process": { 00:17:45.014 "type": "rebuild", 00:17:45.014 "target": "spare", 00:17:45.014 "progress": { 00:17:45.014 "blocks": 5632, 00:17:45.014 "percent": 70 00:17:45.014 } 00:17:45.014 }, 00:17:45.014 "base_bdevs_list": [ 00:17:45.014 { 00:17:45.014 "name": "spare", 00:17:45.014 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:45.014 "is_configured": true, 00:17:45.014 "data_offset": 256, 00:17:45.014 "data_size": 7936 00:17:45.014 }, 00:17:45.014 { 00:17:45.014 "name": "BaseBdev2", 00:17:45.014 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:45.014 "is_configured": true, 00:17:45.014 "data_offset": 256, 00:17:45.014 "data_size": 7936 00:17:45.014 } 00:17:45.014 ] 00:17:45.014 }' 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.014 15:26:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:45.583 [2024-11-10 15:26:51.935215] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:45.583 [2024-11-10 15:26:51.935355] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:45.583 [2024-11-10 15:26:51.935543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.152 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.153 "name": "raid_bdev1", 00:17:46.153 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:46.153 "strip_size_kb": 0, 00:17:46.153 "state": "online", 00:17:46.153 "raid_level": "raid1", 00:17:46.153 "superblock": true, 00:17:46.153 "num_base_bdevs": 2, 00:17:46.153 "num_base_bdevs_discovered": 2, 00:17:46.153 "num_base_bdevs_operational": 2, 00:17:46.153 "base_bdevs_list": [ 00:17:46.153 { 00:17:46.153 "name": "spare", 00:17:46.153 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:46.153 "is_configured": true, 00:17:46.153 "data_offset": 256, 00:17:46.153 "data_size": 7936 00:17:46.153 }, 00:17:46.153 { 00:17:46.153 "name": "BaseBdev2", 00:17:46.153 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:46.153 
"is_configured": true, 00:17:46.153 "data_offset": 256, 00:17:46.153 "data_size": 7936 00:17:46.153 } 00:17:46.153 ] 00:17:46.153 }' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.153 "name": "raid_bdev1", 00:17:46.153 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:46.153 "strip_size_kb": 0, 00:17:46.153 "state": "online", 00:17:46.153 "raid_level": "raid1", 00:17:46.153 "superblock": true, 00:17:46.153 "num_base_bdevs": 2, 00:17:46.153 "num_base_bdevs_discovered": 2, 00:17:46.153 "num_base_bdevs_operational": 2, 00:17:46.153 "base_bdevs_list": [ 00:17:46.153 { 00:17:46.153 "name": "spare", 00:17:46.153 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:46.153 "is_configured": true, 00:17:46.153 "data_offset": 256, 00:17:46.153 "data_size": 7936 00:17:46.153 }, 00:17:46.153 { 00:17:46.153 "name": "BaseBdev2", 00:17:46.153 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:46.153 "is_configured": true, 00:17:46.153 "data_offset": 256, 00:17:46.153 "data_size": 7936 00:17:46.153 } 00:17:46.153 ] 00:17:46.153 }' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.153 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.413 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.413 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.413 "name": "raid_bdev1", 00:17:46.413 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:46.413 "strip_size_kb": 0, 00:17:46.413 "state": "online", 00:17:46.413 "raid_level": "raid1", 00:17:46.413 "superblock": true, 00:17:46.413 "num_base_bdevs": 2, 00:17:46.413 "num_base_bdevs_discovered": 2, 00:17:46.413 "num_base_bdevs_operational": 2, 00:17:46.413 "base_bdevs_list": [ 00:17:46.413 { 00:17:46.413 "name": "spare", 00:17:46.413 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:46.413 
"is_configured": true, 00:17:46.413 "data_offset": 256, 00:17:46.413 "data_size": 7936 00:17:46.413 }, 00:17:46.413 { 00:17:46.413 "name": "BaseBdev2", 00:17:46.413 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:46.413 "is_configured": true, 00:17:46.413 "data_offset": 256, 00:17:46.413 "data_size": 7936 00:17:46.413 } 00:17:46.413 ] 00:17:46.413 }' 00:17:46.413 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.413 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.673 [2024-11-10 15:26:52.965763] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.673 [2024-11-10 15:26:52.965796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:46.673 [2024-11-10 15:26:52.965900] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:46.673 [2024-11-10 15:26:52.965969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:46.673 [2024-11-10 15:26:52.965978] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007400 name raid_bdev1, state offline 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:46.673 15:26:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.673 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.933 [2024-11-10 15:26:53.037796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:46.933 [2024-11-10 15:26:53.037854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.933 [2024-11-10 15:26:53.037879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:46.933 [2024-11-10 15:26:53.037889] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.933 [2024-11-10 15:26:53.040177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.933 [2024-11-10 15:26:53.040255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:46.933 [2024-11-10 15:26:53.040318] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:46.933 [2024-11-10 15:26:53.040369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:46.933 [2024-11-10 15:26:53.040478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.933 spare 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.933 [2024-11-10 15:26:53.140536] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:46.933 [2024-11-10 15:26:53.140565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:46.933 [2024-11-10 15:26:53.140649] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006490 00:17:46.933 [2024-11-10 15:26:53.140717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:46.933 [2024-11-10 15:26:53.140724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:46.933 [2024-11-10 15:26:53.140794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.933 15:26:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.933 15:26:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.933 "name": "raid_bdev1", 00:17:46.933 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:46.933 "strip_size_kb": 0, 00:17:46.933 "state": "online", 00:17:46.933 "raid_level": "raid1", 00:17:46.933 "superblock": true, 00:17:46.933 "num_base_bdevs": 2, 00:17:46.933 "num_base_bdevs_discovered": 2, 00:17:46.933 "num_base_bdevs_operational": 2, 00:17:46.933 "base_bdevs_list": [ 00:17:46.933 { 00:17:46.933 "name": "spare", 00:17:46.933 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:46.933 "is_configured": true, 00:17:46.933 "data_offset": 256, 00:17:46.933 "data_size": 7936 00:17:46.933 }, 00:17:46.933 { 00:17:46.933 "name": "BaseBdev2", 00:17:46.933 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:46.933 "is_configured": true, 00:17:46.933 "data_offset": 256, 00:17:46.933 "data_size": 7936 00:17:46.933 } 00:17:46.933 ] 00:17:46.933 }' 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.933 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.503 15:26:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.503 "name": "raid_bdev1", 00:17:47.503 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:47.503 "strip_size_kb": 0, 00:17:47.503 "state": "online", 00:17:47.503 "raid_level": "raid1", 00:17:47.503 "superblock": true, 00:17:47.503 "num_base_bdevs": 2, 00:17:47.503 "num_base_bdevs_discovered": 2, 00:17:47.503 "num_base_bdevs_operational": 2, 00:17:47.503 "base_bdevs_list": [ 00:17:47.503 { 00:17:47.503 "name": "spare", 00:17:47.503 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:47.503 "is_configured": true, 00:17:47.503 "data_offset": 256, 00:17:47.503 "data_size": 7936 00:17:47.503 }, 00:17:47.503 { 00:17:47.503 "name": "BaseBdev2", 00:17:47.503 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:47.503 "is_configured": true, 00:17:47.503 "data_offset": 256, 00:17:47.503 "data_size": 7936 00:17:47.503 } 00:17:47.503 ] 00:17:47.503 }' 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.503 15:26:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.503 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.504 [2024-11-10 15:26:53.733976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.504 15:26:53 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.504 "name": "raid_bdev1", 00:17:47.504 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:47.504 "strip_size_kb": 0, 00:17:47.504 "state": "online", 00:17:47.504 "raid_level": "raid1", 00:17:47.504 "superblock": true, 00:17:47.504 "num_base_bdevs": 2, 00:17:47.504 "num_base_bdevs_discovered": 1, 00:17:47.504 "num_base_bdevs_operational": 1, 00:17:47.504 "base_bdevs_list": [ 00:17:47.504 { 00:17:47.504 "name": null, 00:17:47.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.504 "is_configured": false, 00:17:47.504 "data_offset": 0, 00:17:47.504 "data_size": 7936 00:17:47.504 }, 00:17:47.504 { 00:17:47.504 "name": "BaseBdev2", 00:17:47.504 
"uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:47.504 "is_configured": true, 00:17:47.504 "data_offset": 256, 00:17:47.504 "data_size": 7936 00:17:47.504 } 00:17:47.504 ] 00:17:47.504 }' 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.504 15:26:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.073 15:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:48.073 15:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.074 15:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.074 [2024-11-10 15:26:54.178148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.074 [2024-11-10 15:26:54.178338] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:48.074 [2024-11-10 15:26:54.178424] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:48.074 [2024-11-10 15:26:54.178480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:48.074 [2024-11-10 15:26:54.184593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:17:48.074 15:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.074 15:26:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:48.074 [2024-11-10 15:26:54.186789] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:49.014 "name": "raid_bdev1", 00:17:49.014 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:49.014 "strip_size_kb": 0, 00:17:49.014 "state": "online", 00:17:49.014 "raid_level": "raid1", 00:17:49.014 "superblock": true, 00:17:49.014 "num_base_bdevs": 2, 00:17:49.014 "num_base_bdevs_discovered": 2, 00:17:49.014 "num_base_bdevs_operational": 2, 00:17:49.014 "process": { 00:17:49.014 "type": "rebuild", 00:17:49.014 "target": "spare", 00:17:49.014 "progress": { 00:17:49.014 "blocks": 2560, 00:17:49.014 "percent": 32 00:17:49.014 } 00:17:49.014 }, 00:17:49.014 "base_bdevs_list": [ 00:17:49.014 { 00:17:49.014 "name": "spare", 00:17:49.014 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:49.014 "is_configured": true, 00:17:49.014 "data_offset": 256, 00:17:49.014 "data_size": 7936 00:17:49.014 }, 00:17:49.014 { 00:17:49.014 "name": "BaseBdev2", 00:17:49.014 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:49.014 "is_configured": true, 00:17:49.014 "data_offset": 256, 00:17:49.014 "data_size": 7936 00:17:49.014 } 00:17:49.014 ] 00:17:49.014 }' 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.014 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.014 [2024-11-10 15:26:55.351982] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.274 [2024-11-10 15:26:55.396641] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:49.274 [2024-11-10 15:26:55.396760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.274 [2024-11-10 15:26:55.396778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.274 [2024-11-10 15:26:55.396789] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.274 15:26:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.274 "name": "raid_bdev1", 00:17:49.274 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:49.274 "strip_size_kb": 0, 00:17:49.274 "state": "online", 00:17:49.274 "raid_level": "raid1", 00:17:49.274 "superblock": true, 00:17:49.274 "num_base_bdevs": 2, 00:17:49.274 "num_base_bdevs_discovered": 1, 00:17:49.274 "num_base_bdevs_operational": 1, 00:17:49.274 "base_bdevs_list": [ 00:17:49.274 { 00:17:49.274 "name": null, 00:17:49.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.274 "is_configured": false, 00:17:49.274 "data_offset": 0, 00:17:49.274 "data_size": 7936 00:17:49.274 }, 00:17:49.274 { 00:17:49.274 "name": "BaseBdev2", 00:17:49.274 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:49.274 "is_configured": true, 00:17:49.274 "data_offset": 256, 00:17:49.274 "data_size": 7936 00:17:49.274 } 00:17:49.274 ] 00:17:49.274 }' 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.274 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.534 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.534 15:26:55 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.534 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.534 [2024-11-10 15:26:55.843209] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.534 [2024-11-10 15:26:55.843309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.534 [2024-11-10 15:26:55.843350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:49.534 [2024-11-10 15:26:55.843386] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.534 [2024-11-10 15:26:55.843638] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.534 [2024-11-10 15:26:55.843690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.534 [2024-11-10 15:26:55.843769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.534 [2024-11-10 15:26:55.843808] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:49.534 [2024-11-10 15:26:55.843852] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:49.534 [2024-11-10 15:26:55.843923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.534 [2024-11-10 15:26:55.848597] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006630 00:17:49.534 spare 00:17:49.534 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.534 15:26:55 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:49.534 [2024-11-10 15:26:55.850807] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:50.916 "name": "raid_bdev1", 00:17:50.916 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:50.916 "strip_size_kb": 0, 00:17:50.916 "state": "online", 00:17:50.916 "raid_level": "raid1", 00:17:50.916 "superblock": true, 00:17:50.916 "num_base_bdevs": 2, 00:17:50.916 "num_base_bdevs_discovered": 2, 00:17:50.916 "num_base_bdevs_operational": 2, 00:17:50.916 "process": { 00:17:50.916 "type": "rebuild", 00:17:50.916 "target": "spare", 00:17:50.916 "progress": { 00:17:50.916 "blocks": 2560, 00:17:50.916 "percent": 32 00:17:50.916 } 00:17:50.916 }, 00:17:50.916 "base_bdevs_list": [ 00:17:50.916 { 00:17:50.916 "name": "spare", 00:17:50.916 "uuid": "c1527990-f5f9-5e8f-aac8-3cf68be5204e", 00:17:50.916 "is_configured": true, 00:17:50.916 "data_offset": 256, 00:17:50.916 "data_size": 7936 00:17:50.916 }, 00:17:50.916 { 00:17:50.916 "name": "BaseBdev2", 00:17:50.916 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:50.916 "is_configured": true, 00:17:50.916 "data_offset": 256, 00:17:50.916 "data_size": 7936 00:17:50.916 } 00:17:50.916 ] 00:17:50.916 }' 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.916 15:26:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 [2024-11-10 
15:26:57.018395] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.916 [2024-11-10 15:26:57.060577] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:50.916 [2024-11-10 15:26:57.060634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.916 [2024-11-10 15:26:57.060670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:50.916 [2024-11-10 15:26:57.060688] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.916 15:26:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.916 "name": "raid_bdev1", 00:17:50.916 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:50.916 "strip_size_kb": 0, 00:17:50.916 "state": "online", 00:17:50.916 "raid_level": "raid1", 00:17:50.916 "superblock": true, 00:17:50.916 "num_base_bdevs": 2, 00:17:50.916 "num_base_bdevs_discovered": 1, 00:17:50.916 "num_base_bdevs_operational": 1, 00:17:50.916 "base_bdevs_list": [ 00:17:50.916 { 00:17:50.916 "name": null, 00:17:50.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.916 "is_configured": false, 00:17:50.916 "data_offset": 0, 00:17:50.916 "data_size": 7936 00:17:50.916 }, 00:17:50.916 { 00:17:50.916 "name": "BaseBdev2", 00:17:50.916 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:50.916 "is_configured": true, 00:17:50.916 "data_offset": 256, 00:17:50.916 "data_size": 7936 00:17:50.916 } 00:17:50.916 ] 00:17:50.916 }' 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.916 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:51.485 15:26:57 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.485 "name": "raid_bdev1", 00:17:51.485 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:51.485 "strip_size_kb": 0, 00:17:51.485 "state": "online", 00:17:51.485 "raid_level": "raid1", 00:17:51.485 "superblock": true, 00:17:51.485 "num_base_bdevs": 2, 00:17:51.485 "num_base_bdevs_discovered": 1, 00:17:51.485 "num_base_bdevs_operational": 1, 00:17:51.485 "base_bdevs_list": [ 00:17:51.485 { 00:17:51.485 "name": null, 00:17:51.485 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.485 "is_configured": false, 00:17:51.485 "data_offset": 0, 00:17:51.485 "data_size": 7936 00:17:51.485 }, 00:17:51.485 { 00:17:51.485 "name": "BaseBdev2", 00:17:51.485 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:51.485 "is_configured": true, 00:17:51.485 "data_offset": 256, 
00:17:51.485 "data_size": 7936 00:17:51.485 } 00:17:51.485 ] 00:17:51.485 }' 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 [2024-11-10 15:26:57.707306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:51.485 [2024-11-10 15:26:57.707358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.485 [2024-11-10 15:26:57.707379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:51.485 [2024-11-10 15:26:57.707388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.485 [2024-11-10 15:26:57.707587] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.485 [2024-11-10 15:26:57.707598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:51.485 [2024-11-10 15:26:57.707647] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:51.485 [2024-11-10 15:26:57.707660] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.485 [2024-11-10 15:26:57.707673] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:51.485 [2024-11-10 15:26:57.707684] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:51.485 BaseBdev1 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.485 15:26:57 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.424 15:26:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.424 "name": "raid_bdev1", 00:17:52.424 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:52.424 "strip_size_kb": 0, 00:17:52.424 "state": "online", 00:17:52.424 "raid_level": "raid1", 00:17:52.424 "superblock": true, 00:17:52.424 "num_base_bdevs": 2, 00:17:52.424 "num_base_bdevs_discovered": 1, 00:17:52.424 "num_base_bdevs_operational": 1, 00:17:52.424 "base_bdevs_list": [ 00:17:52.424 { 00:17:52.424 "name": null, 00:17:52.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.424 "is_configured": false, 00:17:52.424 "data_offset": 0, 00:17:52.424 "data_size": 7936 00:17:52.424 }, 00:17:52.424 { 00:17:52.424 "name": "BaseBdev2", 00:17:52.424 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:52.424 "is_configured": true, 00:17:52.424 "data_offset": 256, 00:17:52.424 "data_size": 7936 00:17:52.424 } 00:17:52.424 ] 00:17:52.424 }' 00:17:52.424 15:26:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.424 15:26:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.993 "name": "raid_bdev1", 00:17:52.993 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:52.993 "strip_size_kb": 0, 00:17:52.993 "state": "online", 00:17:52.993 "raid_level": "raid1", 00:17:52.993 "superblock": true, 00:17:52.993 "num_base_bdevs": 2, 00:17:52.993 "num_base_bdevs_discovered": 1, 00:17:52.993 "num_base_bdevs_operational": 1, 00:17:52.993 "base_bdevs_list": [ 00:17:52.993 { 00:17:52.993 "name": 
null, 00:17:52.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.993 "is_configured": false, 00:17:52.993 "data_offset": 0, 00:17:52.993 "data_size": 7936 00:17:52.993 }, 00:17:52.993 { 00:17:52.993 "name": "BaseBdev2", 00:17:52.993 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:52.993 "is_configured": true, 00:17:52.993 "data_offset": 256, 00:17:52.993 "data_size": 7936 00:17:52.993 } 00:17:52.993 ] 00:17:52.993 }' 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:52.993 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.994 [2024-11-10 15:26:59.339752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.994 [2024-11-10 15:26:59.339874] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:52.994 [2024-11-10 15:26:59.339889] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:52.994 request: 00:17:52.994 { 00:17:52.994 "base_bdev": "BaseBdev1", 00:17:52.994 "raid_bdev": "raid_bdev1", 00:17:52.994 "method": "bdev_raid_add_base_bdev", 00:17:52.994 "req_id": 1 00:17:52.994 } 00:17:52.994 Got JSON-RPC error response 00:17:52.994 response: 00:17:52.994 { 00:17:52.994 "code": -22, 00:17:52.994 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:52.994 } 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:52.994 15:26:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.373 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.374 "name": "raid_bdev1", 00:17:54.374 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:54.374 "strip_size_kb": 0, 
00:17:54.374 "state": "online", 00:17:54.374 "raid_level": "raid1", 00:17:54.374 "superblock": true, 00:17:54.374 "num_base_bdevs": 2, 00:17:54.374 "num_base_bdevs_discovered": 1, 00:17:54.374 "num_base_bdevs_operational": 1, 00:17:54.374 "base_bdevs_list": [ 00:17:54.374 { 00:17:54.374 "name": null, 00:17:54.374 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.374 "is_configured": false, 00:17:54.374 "data_offset": 0, 00:17:54.374 "data_size": 7936 00:17:54.374 }, 00:17:54.374 { 00:17:54.374 "name": "BaseBdev2", 00:17:54.374 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:54.374 "is_configured": true, 00:17:54.374 "data_offset": 256, 00:17:54.374 "data_size": 7936 00:17:54.374 } 00:17:54.374 ] 00:17:54.374 }' 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.374 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.633 
15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.633 "name": "raid_bdev1", 00:17:54.633 "uuid": "a864e968-0e9c-454d-9b31-348bb9d947c1", 00:17:54.633 "strip_size_kb": 0, 00:17:54.633 "state": "online", 00:17:54.633 "raid_level": "raid1", 00:17:54.633 "superblock": true, 00:17:54.633 "num_base_bdevs": 2, 00:17:54.633 "num_base_bdevs_discovered": 1, 00:17:54.633 "num_base_bdevs_operational": 1, 00:17:54.633 "base_bdevs_list": [ 00:17:54.633 { 00:17:54.633 "name": null, 00:17:54.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.633 "is_configured": false, 00:17:54.633 "data_offset": 0, 00:17:54.633 "data_size": 7936 00:17:54.633 }, 00:17:54.633 { 00:17:54.633 "name": "BaseBdev2", 00:17:54.633 "uuid": "0f4473a9-78e0-56f2-b301-4b91cbcadf0a", 00:17:54.633 "is_configured": true, 00:17:54.633 "data_offset": 256, 00:17:54.633 "data_size": 7936 00:17:54.633 } 00:17:54.633 ] 00:17:54.633 }' 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:54.633 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 100747 00:17:54.634 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # '[' -z 100747 ']' 00:17:54.634 15:27:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # kill -0 100747 00:17:54.634 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # uname 00:17:54.634 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:54.634 15:27:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 100747 00:17:54.893 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:54.893 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:54.893 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@970 -- # echo 'killing process with pid 100747' 00:17:54.893 killing process with pid 100747 00:17:54.893 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@971 -- # kill 100747 00:17:54.893 Received shutdown signal, test time was about 60.000000 seconds 00:17:54.893 00:17:54.894 Latency(us) 00:17:54.894 [2024-11-10T15:27:01.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:54.894 [2024-11-10T15:27:01.257Z] =================================================================================================================== 00:17:54.894 [2024-11-10T15:27:01.257Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:54.894 [2024-11-10 15:27:01.008048] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:54.894 [2024-11-10 15:27:01.008202] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:54.894 [2024-11-10 15:27:01.008248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:54.894 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@976 -- # wait 100747 00:17:54.894 [2024-11-10 15:27:01.008261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:54.894 [2024-11-10 15:27:01.067401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:55.154 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:55.154 00:17:55.154 real 0m16.446s 00:17:55.154 user 0m21.851s 00:17:55.154 sys 0m1.874s 00:17:55.154 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.154 15:27:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.154 ************************************ 00:17:55.154 END TEST raid_rebuild_test_sb_md_interleaved 00:17:55.154 ************************************ 00:17:55.154 15:27:01 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:55.154 15:27:01 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:55.154 15:27:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 100747 ']' 00:17:55.154 15:27:01 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 100747 00:17:55.154 15:27:01 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:55.154 ************************************ 00:17:55.154 END TEST bdev_raid 00:17:55.154 ************************************ 00:17:55.154 00:17:55.154 real 10m5.105s 00:17:55.154 user 14m11.076s 00:17:55.154 sys 1m55.006s 00:17:55.154 15:27:01 bdev_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:55.154 15:27:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.414 15:27:01 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:55.414 15:27:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:55.414 15:27:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:55.414 15:27:01 -- common/autotest_common.sh@10 -- # set 
+x 00:17:55.414 ************************************ 00:17:55.414 START TEST spdkcli_raid 00:17:55.414 ************************************ 00:17:55.414 15:27:01 spdkcli_raid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:55.414 * Looking for test storage... 00:17:55.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:55.414 15:27:01 spdkcli_raid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:55.414 15:27:01 spdkcli_raid -- common/autotest_common.sh@1691 -- # lcov --version 00:17:55.414 15:27:01 spdkcli_raid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:55.414 15:27:01 spdkcli_raid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:55.414 15:27:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:55.674 15:27:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.675 15:27:01 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:55.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.675 --rc genhtml_branch_coverage=1 00:17:55.675 --rc genhtml_function_coverage=1 00:17:55.675 --rc genhtml_legend=1 00:17:55.675 --rc geninfo_all_blocks=1 00:17:55.675 --rc geninfo_unexecuted_blocks=1 00:17:55.675 00:17:55.675 ' 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:55.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.675 --rc genhtml_branch_coverage=1 00:17:55.675 --rc genhtml_function_coverage=1 00:17:55.675 --rc genhtml_legend=1 00:17:55.675 --rc geninfo_all_blocks=1 00:17:55.675 --rc geninfo_unexecuted_blocks=1 00:17:55.675 00:17:55.675 ' 00:17:55.675 
15:27:01 spdkcli_raid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:55.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.675 --rc genhtml_branch_coverage=1 00:17:55.675 --rc genhtml_function_coverage=1 00:17:55.675 --rc genhtml_legend=1 00:17:55.675 --rc geninfo_all_blocks=1 00:17:55.675 --rc geninfo_unexecuted_blocks=1 00:17:55.675 00:17:55.675 ' 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:55.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.675 --rc genhtml_branch_coverage=1 00:17:55.675 --rc genhtml_function_coverage=1 00:17:55.675 --rc genhtml_legend=1 00:17:55.675 --rc geninfo_all_blocks=1 00:17:55.675 --rc geninfo_unexecuted_blocks=1 00:17:55.675 00:17:55.675 ' 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:55.675 15:27:01 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=101412 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:55.675 15:27:01 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 101412 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@833 -- # '[' -z 101412 ']' 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:55.675 15:27:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:55.675 [2024-11-10 15:27:01.919938] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 
00:17:55.675 [2024-11-10 15:27:01.920157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101412 ] 00:17:55.935 [2024-11-10 15:27:02.054157] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:55.935 [2024-11-10 15:27:02.092060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:55.935 [2024-11-10 15:27:02.135249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.935 [2024-11-10 15:27:02.135338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.507 15:27:02 spdkcli_raid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:56.507 15:27:02 spdkcli_raid -- common/autotest_common.sh@866 -- # return 0 00:17:56.508 15:27:02 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:56.508 15:27:02 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:56.508 15:27:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.508 15:27:02 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:56.508 15:27:02 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:56.508 15:27:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.508 15:27:02 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:56.508 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:56.508 ' 00:17:58.441 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:17:58.441 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:17:58.441 15:27:04 spdkcli_raid -- 
spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:17:58.441 15:27:04 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.441 15:27:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.441 15:27:04 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:17:58.441 15:27:04 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:58.441 15:27:04 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.441 15:27:04 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:17:58.441 ' 00:17:59.395 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:17:59.395 15:27:05 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:17:59.395 15:27:05 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.395 15:27:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.395 15:27:05 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:17:59.395 15:27:05 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.395 15:27:05 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.395 15:27:05 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:17:59.395 15:27:05 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:17:59.965 15:27:06 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:17:59.965 15:27:06 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:17:59.965 15:27:06 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:17:59.965 15:27:06 
spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.965 15:27:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.965 15:27:06 spdkcli_raid -- spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:17:59.965 15:27:06 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:59.965 15:27:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.965 15:27:06 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:17:59.965 ' 00:18:00.903 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:01.163 15:27:07 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:01.163 15:27:07 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:01.163 15:27:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.163 15:27:07 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:01.163 15:27:07 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.163 15:27:07 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.163 15:27:07 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:01.163 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:01.163 ' 00:18:02.543 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:02.543 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:02.543 15:27:08 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.543 15:27:08 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 101412 00:18:02.543 15:27:08 spdkcli_raid -- 
common/autotest_common.sh@952 -- # '[' -z 101412 ']' 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 101412 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@957 -- # uname 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:02.543 15:27:08 spdkcli_raid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101412 00:18:02.803 killing process with pid 101412 00:18:02.803 15:27:08 spdkcli_raid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:02.803 15:27:08 spdkcli_raid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:02.803 15:27:08 spdkcli_raid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101412' 00:18:02.803 15:27:08 spdkcli_raid -- common/autotest_common.sh@971 -- # kill 101412 00:18:02.803 15:27:08 spdkcli_raid -- common/autotest_common.sh@976 -- # wait 101412 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 101412 ']' 00:18:03.373 Process with pid 101412 is not found 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 101412 00:18:03.373 15:27:09 spdkcli_raid -- common/autotest_common.sh@952 -- # '[' -z 101412 ']' 00:18:03.373 15:27:09 spdkcli_raid -- common/autotest_common.sh@956 -- # kill -0 101412 00:18:03.373 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (101412) - No such process 00:18:03.373 15:27:09 spdkcli_raid -- common/autotest_common.sh@979 -- # echo 'Process with pid 101412 is not found' 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:03.373 15:27:09 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:03.373 00:18:03.373 real 0m7.990s 00:18:03.373 user 0m16.652s 00:18:03.373 sys 0m1.264s 00:18:03.373 15:27:09 spdkcli_raid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:03.373 15:27:09 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.373 ************************************ 00:18:03.373 END TEST spdkcli_raid 00:18:03.373 ************************************ 00:18:03.373 15:27:09 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:03.373 15:27:09 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:03.373 15:27:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:03.373 15:27:09 -- common/autotest_common.sh@10 -- # set +x 00:18:03.373 ************************************ 00:18:03.373 START TEST blockdev_raid5f 00:18:03.373 ************************************ 00:18:03.373 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:03.634 * Looking for test storage... 
00:18:03.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lcov --version 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra ver2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:03.634 15:27:09 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:03.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.634 --rc genhtml_branch_coverage=1 00:18:03.634 --rc genhtml_function_coverage=1 00:18:03.634 --rc genhtml_legend=1 00:18:03.634 --rc geninfo_all_blocks=1 00:18:03.634 --rc geninfo_unexecuted_blocks=1 00:18:03.634 00:18:03.634 ' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:03.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.634 --rc genhtml_branch_coverage=1 00:18:03.634 --rc genhtml_function_coverage=1 00:18:03.634 --rc genhtml_legend=1 00:18:03.634 --rc geninfo_all_blocks=1 00:18:03.634 --rc geninfo_unexecuted_blocks=1 
00:18:03.634 00:18:03.634 ' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:03.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.634 --rc genhtml_branch_coverage=1 00:18:03.634 --rc genhtml_function_coverage=1 00:18:03.634 --rc genhtml_legend=1 00:18:03.634 --rc geninfo_all_blocks=1 00:18:03.634 --rc geninfo_unexecuted_blocks=1 00:18:03.634 00:18:03.634 ' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:03.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:03.634 --rc genhtml_branch_coverage=1 00:18:03.634 --rc genhtml_function_coverage=1 00:18:03.634 --rc genhtml_legend=1 00:18:03.634 --rc geninfo_all_blocks=1 00:18:03.634 --rc geninfo_unexecuted_blocks=1 00:18:03.634 00:18:03.634 ' 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@671 -- 
# QOS_RUN_TIME=5 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=101670 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:03.634 15:27:09 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 101670 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@833 -- # '[' -z 101670 ']' 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:03.634 15:27:09 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:03.634 [2024-11-10 15:27:09.973921] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:03.634 [2024-11-10 15:27:09.974156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101670 ] 00:18:03.894 [2024-11-10 15:27:10.109172] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:03.894 [2024-11-10 15:27:10.148374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.894 [2024-11-10 15:27:10.188377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.465 15:27:10 blockdev_raid5f -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:04.465 15:27:10 blockdev_raid5f -- common/autotest_common.sh@866 -- # return 0 00:18:04.465 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:04.465 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:04.465 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:04.465 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.465 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.465 Malloc0 00:18:04.465 Malloc1 00:18:04.725 Malloc2 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.725 15:27:10 blockdev_raid5f -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:04.725 15:27:10 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@748 
-- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f5519252-a981-4312-9b67-945743818b90",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a2b3d7e2-e109-4121-ba5c-813c1675faf4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7765f5c9-be5b-4bc0-86ae-6837cadd691e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:04.725 15:27:10 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:04.725 15:27:11 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:04.725 15:27:11 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:04.725 15:27:11 blockdev_raid5f -- 
bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:04.725 15:27:11 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 101670 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@952 -- # '[' -z 101670 ']' 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@956 -- # kill -0 101670 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@957 -- # uname 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101670 00:18:04.725 killing process with pid 101670 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101670' 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@971 -- # kill 101670 00:18:04.725 15:27:11 blockdev_raid5f -- common/autotest_common.sh@976 -- # wait 101670 00:18:05.666 15:27:11 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:05.666 15:27:11 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:05.666 15:27:11 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:18:05.666 15:27:11 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:05.666 15:27:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:05.666 ************************************ 00:18:05.666 START TEST bdev_hello_world 00:18:05.666 ************************************ 00:18:05.666 15:27:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:05.666 [2024-11-10 15:27:11.834983] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:05.666 [2024-11-10 15:27:11.835128] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101715 ] 00:18:05.666 [2024-11-10 15:27:11.972788] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:05.666 [2024-11-10 15:27:12.010917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.926 [2024-11-10 15:27:12.053806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.186 [2024-11-10 15:27:12.299133] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:06.186 [2024-11-10 15:27:12.299191] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:06.186 [2024-11-10 15:27:12.299223] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:06.186 [2024-11-10 15:27:12.299581] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:06.186 [2024-11-10 15:27:12.299728] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:06.186 [2024-11-10 15:27:12.299748] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:06.186 [2024-11-10 15:27:12.299795] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:06.186 00:18:06.186 [2024-11-10 15:27:12.299821] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:06.446 00:18:06.446 real 0m0.920s 00:18:06.446 user 0m0.501s 00:18:06.446 sys 0m0.312s 00:18:06.446 15:27:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:06.446 ************************************ 00:18:06.446 END TEST bdev_hello_world 00:18:06.446 ************************************ 00:18:06.446 15:27:12 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:06.446 15:27:12 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:06.446 15:27:12 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:06.446 15:27:12 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:06.446 15:27:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:06.446 ************************************ 00:18:06.446 START TEST bdev_bounds 00:18:06.446 ************************************ 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:18:06.446 Process bdevio pid: 101746 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=101746 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 101746' 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 101746 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 101746 ']' 00:18:06.446 15:27:12 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:06.446 15:27:12 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:06.706 [2024-11-10 15:27:12.836464] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:06.706 [2024-11-10 15:27:12.836697] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101746 ] 00:18:06.706 [2024-11-10 15:27:12.976061] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:06.706 [2024-11-10 15:27:13.011680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:06.706 [2024-11-10 15:27:13.055874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:06.706 [2024-11-10 15:27:13.056060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.706 [2024-11-10 15:27:13.056128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:07.646 I/O targets: 00:18:07.646 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:07.646 00:18:07.646 00:18:07.646 CUnit - A unit testing framework for C - Version 2.1-3 00:18:07.646 http://cunit.sourceforge.net/ 00:18:07.646 00:18:07.646 00:18:07.646 Suite: bdevio tests on: raid5f 00:18:07.646 Test: blockdev write read block ...passed 00:18:07.646 Test: blockdev write zeroes read block ...passed 00:18:07.646 Test: blockdev write zeroes read no split ...passed 00:18:07.646 Test: blockdev write zeroes read split ...passed 00:18:07.646 Test: blockdev write zeroes read split partial ...passed 00:18:07.646 Test: blockdev reset ...passed 00:18:07.646 Test: blockdev write read 8 blocks ...passed 00:18:07.646 Test: blockdev write read size > 128k ...passed 00:18:07.646 Test: blockdev write read invalid size ...passed 00:18:07.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:07.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:07.646 Test: blockdev write read max offset ...passed 00:18:07.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:07.646 Test: blockdev writev readv 8 blocks ...passed 00:18:07.646 Test: 
blockdev writev readv 30 x 1block ...passed 00:18:07.646 Test: blockdev writev readv block ...passed 00:18:07.646 Test: blockdev writev readv size > 128k ...passed 00:18:07.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:07.646 Test: blockdev comparev and writev ...passed 00:18:07.646 Test: blockdev nvme passthru rw ...passed 00:18:07.646 Test: blockdev nvme passthru vendor specific ...passed 00:18:07.646 Test: blockdev nvme admin passthru ...passed 00:18:07.646 Test: blockdev copy ...passed 00:18:07.646 00:18:07.646 Run Summary: Type Total Ran Passed Failed Inactive 00:18:07.646 suites 1 1 n/a 0 0 00:18:07.646 tests 23 23 23 0 0 00:18:07.646 asserts 130 130 130 0 n/a 00:18:07.646 00:18:07.646 Elapsed time = 0.353 seconds 00:18:07.646 0 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 101746 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 101746 ']' 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 101746 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101746 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101746' 00:18:07.646 killing process with pid 101746 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@971 -- # kill 101746 00:18:07.646 15:27:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@976 -- # wait 101746 
00:18:08.216 15:27:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:08.216 00:18:08.216 real 0m1.619s 00:18:08.216 user 0m3.765s 00:18:08.216 sys 0m0.457s 00:18:08.216 15:27:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:08.216 15:27:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:08.216 ************************************ 00:18:08.216 END TEST bdev_bounds 00:18:08.216 ************************************ 00:18:08.216 15:27:14 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:08.216 15:27:14 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:08.216 15:27:14 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:08.216 15:27:14 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:08.216 ************************************ 00:18:08.216 START TEST bdev_nbd 00:18:08.216 ************************************ 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:08.216 15:27:14 
blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=101789 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 101789 /var/tmp/spdk-nbd.sock 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 101789 ']' 00:18:08.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:08.216 15:27:14 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:08.216 [2024-11-10 15:27:14.549869] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:08.216 [2024-11-10 15:27:14.550102] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.476 [2024-11-10 15:27:14.687402] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:08.476 [2024-11-10 15:27:14.725187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.476 [2024-11-10 15:27:14.764668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:09.046 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
basename /dev/nbd0 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.306 1+0 records in 00:18:09.306 1+0 records out 00:18:09.306 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364069 s, 11.3 MB/s 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:09.306 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:09.306 15:27:15 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:09.566 { 00:18:09.566 "nbd_device": "/dev/nbd0", 00:18:09.566 "bdev_name": "raid5f" 00:18:09.566 } 00:18:09.566 ]' 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:09.566 { 00:18:09.566 "nbd_device": "/dev/nbd0", 00:18:09.566 "bdev_name": "raid5f" 00:18:09.566 } 00:18:09.566 ]' 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.566 15:27:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:09.826 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:10.086 15:27:16 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.086 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:10.346 /dev/nbd0 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:18:10.346 15:27:16 
blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:10.346 1+0 records in 00:18:10.346 1+0 records out 00:18:10.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003681 s, 11.1 MB/s 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.346 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:18:10.606 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:10.606 { 00:18:10.606 "nbd_device": "/dev/nbd0", 00:18:10.606 "bdev_name": "raid5f" 00:18:10.606 } 00:18:10.606 ]' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:10.607 { 00:18:10.607 "nbd_device": "/dev/nbd0", 00:18:10.607 "bdev_name": "raid5f" 00:18:10.607 } 00:18:10.607 ]' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
bs=4096 count=256 00:18:10.607 256+0 records in 00:18:10.607 256+0 records out 00:18:10.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433853 s, 242 MB/s 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:10.607 256+0 records in 00:18:10.607 256+0 records out 00:18:10.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255029 s, 41.1 MB/s 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.607 15:27:16 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:10.867 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:11.126 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:11.385 malloc_lvol_verify 00:18:11.385 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:11.644 9f70eae0-5622-4d78-9484-d4fade759f64 00:18:11.644 15:27:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:11.903 4917edad-e349-4f11-a9cf-c61921b3b382 00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:11.903 /dev/nbd0 
00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:11.903 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:11.903 Discarding device blocks: 0/4096mke2fs 1.47.0 (5-Feb-2023) 00:18:11.903  done 00:18:11.903 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:11.903 00:18:11.903 Allocating group tables: 0/1 done 00:18:11.903 Writing inode tables: 0/1 done 00:18:11.903 Creating journal (1024 blocks): done 00:18:11.904 Writing superblocks and filesystem accounting information: 0/1 done 00:18:11.904 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:11.904 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 
00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 101789 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 101789 ']' 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 101789 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 101789 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:12.163 killing process with pid 101789 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 101789' 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@971 -- # kill 101789 00:18:12.163 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@976 -- # wait 101789 00:18:12.732 15:27:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:12.732 00:18:12.732 real 0m4.450s 00:18:12.732 user 0m6.286s 00:18:12.732 sys 0m1.355s 00:18:12.732 15:27:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:12.732 15:27:18 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:18:12.732 ************************************ 00:18:12.732 END TEST bdev_nbd 00:18:12.732 ************************************ 00:18:12.732 15:27:18 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:12.732 15:27:18 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:12.732 15:27:18 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:12.732 15:27:18 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:12.732 15:27:18 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:18:12.732 15:27:18 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:12.733 15:27:18 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:12.733 ************************************ 00:18:12.733 START TEST bdev_fio 00:18:12.733 ************************************ 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:12.733 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local 
config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:18:12.733 15:27:18 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:18:12.733 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:18:12.733 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo 
filename=raid5f 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:12.993 ************************************ 00:18:12.993 START TEST bdev_fio_rw_verify 00:18:12.993 ************************************ 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # 
local fio_dir=/usr/src/fio 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:12.993 15:27:19 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:13.253 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:13.253 fio-3.35 00:18:13.253 Starting 1 thread 00:18:25.471 00:18:25.471 job_raid5f: (groupid=0, jobs=1): err= 0: pid=101980: Sun Nov 10 15:27:29 2024 00:18:25.471 read: IOPS=12.0k, BW=47.0MiB/s (49.3MB/s)(470MiB/10001msec) 00:18:25.471 slat (nsec): min=16897, max=81849, avg=19392.20, stdev=2604.97 00:18:25.471 clat (usec): min=12, max=602, avg=133.20, stdev=46.93 00:18:25.471 lat (usec): min=32, max=658, avg=152.59, stdev=47.43 00:18:25.471 clat percentiles (usec): 00:18:25.471 | 50.000th=[ 135], 99.000th=[ 223], 99.900th=[ 273], 99.990th=[ 506], 00:18:25.471 | 99.999th=[ 586] 00:18:25.471 write: IOPS=12.6k, BW=49.3MiB/s (51.7MB/s)(486MiB/9871msec); 0 zone resets 00:18:25.471 slat (usec): min=7, max=285, avg=17.31, stdev= 4.71 00:18:25.471 clat (usec): min=59, max=1935, avg=305.66, stdev=55.97 00:18:25.471 lat (usec): min=74, max=1970, avg=322.97, stdev=57.88 00:18:25.471 clat percentiles (usec): 00:18:25.471 | 50.000th=[ 310], 99.000th=[ 388], 99.900th=[ 979], 99.990th=[ 1647], 00:18:25.471 | 99.999th=[ 1926] 00:18:25.471 bw ( KiB/s): min=46512, max=54680, per=98.72%, avg=49798.32, stdev=2075.84, samples=19 00:18:25.471 iops : min=11628, max=13670, avg=12449.58, stdev=518.96, samples=19 00:18:25.471 lat (usec) : 20=0.01%, 50=0.01%, 100=14.75%, 250=39.73%, 500=45.39% 00:18:25.471 lat (usec) : 750=0.05%, 1000=0.03% 00:18:25.471 lat (msec) : 2=0.05% 00:18:25.471 cpu : usr=98.74%, sys=0.47%, ctx=30, majf=0, minf=12969 00:18:25.471 IO depths : 1=7.7%, 2=19.9%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:25.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.471 complete : 0=0.0%, 4=90.0%, 8=10.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:25.471 issued rwts: total=120331,124477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:25.471 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:25.471 00:18:25.471 Run status group 0 (all jobs): 00:18:25.471 READ: bw=47.0MiB/s (49.3MB/s), 47.0MiB/s-47.0MiB/s (49.3MB/s-49.3MB/s), io=470MiB (493MB), run=10001-10001msec 00:18:25.471 WRITE: bw=49.3MiB/s (51.7MB/s), 49.3MiB/s-49.3MiB/s (51.7MB/s-51.7MB/s), io=486MiB (510MB), run=9871-9871msec 00:18:25.471 ----------------------------------------------------- 00:18:25.471 Suppressions used: 00:18:25.471 count bytes template 00:18:25.472 1 7 /usr/src/fio/parse.c 00:18:25.472 423 40608 /usr/src/fio/iolog.c 00:18:25.472 1 8 libtcmalloc_minimal.so 00:18:25.472 1 904 libcrypto.so 00:18:25.472 ----------------------------------------------------- 00:18:25.472 00:18:25.472 00:18:25.472 real 0m11.406s 00:18:25.472 user 0m11.544s 00:18:25.472 sys 0m0.680s 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 ************************************ 00:18:25.472 END TEST bdev_fio_rw_verify 00:18:25.472 ************************************ 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- 
common/autotest_common.sh@1284 -- # local bdev_type= 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "996123ab-5cab-42b5-ac1c-c3d0b9e8e64e",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "f5519252-a981-4312-9b67-945743818b90",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "a2b3d7e2-e109-4121-ba5c-813c1675faf4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "7765f5c9-be5b-4bc0-86ae-6837cadd691e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:25.472 /home/vagrant/spdk_repo/spdk 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:25.472 00:18:25.472 real 0m11.706s 00:18:25.472 user 0m11.673s 00:18:25.472 sys 0m0.823s 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:25.472 15:27:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 ************************************ 00:18:25.472 END TEST bdev_fio 00:18:25.472 
************************************ 00:18:25.472 15:27:30 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:25.472 15:27:30 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:25.472 15:27:30 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:18:25.472 15:27:30 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:25.472 15:27:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:25.472 ************************************ 00:18:25.472 START TEST bdev_verify 00:18:25.472 ************************************ 00:18:25.472 15:27:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:25.472 [2024-11-10 15:27:30.839678] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:25.472 [2024-11-10 15:27:30.839811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102132 ] 00:18:25.472 [2024-11-10 15:27:30.974290] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:25.472 [2024-11-10 15:27:31.011829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:25.472 [2024-11-10 15:27:31.060992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.472 [2024-11-10 15:27:31.061143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.472 Running I/O for 5 seconds... 00:18:26.980 11070.00 IOPS, 43.24 MiB/s [2024-11-10T15:27:34.725Z] 11181.00 IOPS, 43.68 MiB/s [2024-11-10T15:27:35.664Z] 11185.67 IOPS, 43.69 MiB/s [2024-11-10T15:27:36.604Z] 11212.25 IOPS, 43.80 MiB/s [2024-11-10T15:27:36.604Z] 11186.20 IOPS, 43.70 MiB/s 00:18:30.241 Latency(us) 00:18:30.241 [2024-11-10T15:27:36.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.241 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:30.241 Verification LBA range: start 0x0 length 0x2000 00:18:30.241 raid5f : 5.02 6756.00 26.39 0.00 0.00 28456.99 273.11 21477.85 00:18:30.241 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.241 Verification LBA range: start 0x2000 length 0x2000 00:18:30.241 raid5f : 5.02 4437.01 17.33 0.00 0.00 43186.87 160.66 30845.85 00:18:30.241 [2024-11-10T15:27:36.604Z] =================================================================================================================== 00:18:30.241 [2024-11-10T15:27:36.604Z] Total : 11193.01 43.72 0.00 0.00 34296.30 160.66 30845.85 00:18:30.501 00:18:30.501 real 0m5.948s 00:18:30.501 user 0m10.979s 00:18:30.501 sys 0m0.327s 00:18:30.501 15:27:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:30.501 15:27:36 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:30.501 ************************************ 00:18:30.501 END TEST bdev_verify 00:18:30.501 ************************************ 00:18:30.501 15:27:36 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:30.501 15:27:36 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:18:30.501 15:27:36 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:30.501 15:27:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:30.501 ************************************ 00:18:30.501 START TEST bdev_verify_big_io 00:18:30.501 ************************************ 00:18:30.501 15:27:36 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:30.762 [2024-11-10 15:27:36.867293] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:30.762 [2024-11-10 15:27:36.867430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102214 ] 00:18:30.762 [2024-11-10 15:27:37.003358] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:30.762 [2024-11-10 15:27:37.042221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:30.762 [2024-11-10 15:27:37.089447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.762 [2024-11-10 15:27:37.089564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.022 Running I/O for 5 seconds... 
00:18:33.398 633.00 IOPS, 39.56 MiB/s [2024-11-10T15:27:40.700Z] 761.00 IOPS, 47.56 MiB/s [2024-11-10T15:27:41.640Z] 803.00 IOPS, 50.19 MiB/s [2024-11-10T15:27:42.579Z] 793.25 IOPS, 49.58 MiB/s [2024-11-10T15:27:42.840Z] 812.40 IOPS, 50.77 MiB/s 00:18:36.477 Latency(us) 00:18:36.477 [2024-11-10T15:27:42.840Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:36.477 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:36.477 Verification LBA range: start 0x0 length 0x200 00:18:36.477 raid5f : 5.18 466.28 29.14 0.00 0.00 6860531.94 197.25 294292.23 00:18:36.477 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:36.477 Verification LBA range: start 0x200 length 0x200 00:18:36.477 raid5f : 5.26 362.29 22.64 0.00 0.00 8743166.31 203.50 372892.01 00:18:36.477 [2024-11-10T15:27:42.840Z] =================================================================================================================== 00:18:36.477 [2024-11-10T15:27:42.840Z] Total : 828.57 51.79 0.00 0.00 7691105.92 197.25 372892.01 00:18:36.737 00:18:36.737 real 0m6.200s 00:18:36.737 user 0m11.464s 00:18:36.737 sys 0m0.336s 00:18:36.737 15:27:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:36.737 15:27:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:36.737 ************************************ 00:18:36.737 END TEST bdev_verify_big_io 00:18:36.737 ************************************ 00:18:36.737 15:27:43 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:36.737 15:27:43 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:18:36.737 15:27:43 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:36.737 15:27:43 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:36.737 ************************************ 00:18:36.737 START TEST bdev_write_zeroes 00:18:36.737 ************************************ 00:18:36.737 15:27:43 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:36.997 [2024-11-10 15:27:43.139849] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:36.997 [2024-11-10 15:27:43.139969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102301 ] 00:18:36.997 [2024-11-10 15:27:43.273274] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:36.997 [2024-11-10 15:27:43.313155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.997 [2024-11-10 15:27:43.358118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.256 Running I/O for 1 seconds... 
00:18:38.637 30087.00 IOPS, 117.53 MiB/s 00:18:38.637 Latency(us) 00:18:38.637 [2024-11-10T15:27:45.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.637 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:38.637 raid5f : 1.01 30054.69 117.40 0.00 0.00 4244.51 1420.91 5940.68 00:18:38.637 [2024-11-10T15:27:45.000Z] =================================================================================================================== 00:18:38.637 [2024-11-10T15:27:45.000Z] Total : 30054.69 117.40 0.00 0.00 4244.51 1420.91 5940.68 00:18:38.637 00:18:38.637 real 0m1.919s 00:18:38.637 user 0m1.515s 00:18:38.637 sys 0m0.291s 00:18:38.637 15:27:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:38.637 15:27:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:38.637 ************************************ 00:18:38.637 END TEST bdev_write_zeroes 00:18:38.637 ************************************ 00:18:38.897 15:27:45 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:38.897 15:27:45 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:18:38.897 15:27:45 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:38.897 15:27:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:38.897 ************************************ 00:18:38.897 START TEST bdev_json_nonenclosed 00:18:38.897 ************************************ 00:18:38.897 15:27:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:38.897 [2024-11-10 
15:27:45.138283] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:38.897 [2024-11-10 15:27:45.138405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102342 ] 00:18:39.157 [2024-11-10 15:27:45.275147] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.157 [2024-11-10 15:27:45.314586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.157 [2024-11-10 15:27:45.356850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.157 [2024-11-10 15:27:45.356955] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:39.157 [2024-11-10 15:27:45.356983] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:39.157 [2024-11-10 15:27:45.356993] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:39.157 00:18:39.157 real 0m0.417s 00:18:39.157 user 0m0.170s 00:18:39.157 sys 0m0.144s 00:18:39.157 15:27:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:39.157 15:27:45 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:39.157 ************************************ 00:18:39.157 END TEST bdev_json_nonenclosed 00:18:39.157 ************************************ 00:18:39.417 15:27:45 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:39.417 15:27:45 blockdev_raid5f -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:18:39.417 
15:27:45 blockdev_raid5f -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:39.417 15:27:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:39.417 ************************************ 00:18:39.417 START TEST bdev_json_nonarray 00:18:39.417 ************************************ 00:18:39.417 15:27:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:39.417 [2024-11-10 15:27:45.623253] Starting SPDK v25.01-pre git sha1 06bc8ce53 / DPDK 24.11.0-rc1 initialization... 00:18:39.417 [2024-11-10 15:27:45.623367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102369 ] 00:18:39.417 [2024-11-10 15:27:45.755971] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc1 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:39.677 [2024-11-10 15:27:45.793319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.677 [2024-11-10 15:27:45.837599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.677 [2024-11-10 15:27:45.837713] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:39.677 [2024-11-10 15:27:45.837734] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:39.677 [2024-11-10 15:27:45.837745] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:39.677 00:18:39.677 real 0m0.410s 00:18:39.677 user 0m0.171s 00:18:39.677 sys 0m0.135s 00:18:39.677 15:27:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:39.677 15:27:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:39.677 ************************************ 00:18:39.677 END TEST bdev_json_nonarray 00:18:39.677 ************************************ 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:39.677 15:27:46 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:39.677 00:18:39.677 real 0m36.403s 00:18:39.677 user 0m48.597s 00:18:39.677 sys 0m5.404s 00:18:39.677 15:27:46 blockdev_raid5f -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:39.677 15:27:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:39.677 
************************************ 00:18:39.677 END TEST blockdev_raid5f 00:18:39.677 ************************************ 00:18:39.937 15:27:46 -- spdk/autotest.sh@194 -- # uname -s 00:18:39.937 15:27:46 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@256 -- # timing_exit lib 00:18:39.937 15:27:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:39.937 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:39.937 15:27:46 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:39.937 15:27:46 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:18:39.937 15:27:46 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:18:39.937 15:27:46 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:18:39.937 15:27:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:39.937 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:39.937 15:27:46 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:18:39.937 15:27:46 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:18:39.937 15:27:46 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:18:39.937 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:18:42.478 INFO: APP EXITING 00:18:42.478 INFO: killing all VMs 00:18:42.478 INFO: killing vhost app 00:18:42.478 INFO: EXIT DONE 00:18:42.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:42.738 Waiting for block devices as requested 00:18:42.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:18:42.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:18:43.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:43.677 Cleaning 00:18:43.677 Removing: /var/run/dpdk/spdk0/config 00:18:43.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:18:43.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:18:43.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:18:43.937 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:18:43.937 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:18:43.937 Removing: /var/run/dpdk/spdk0/hugepage_info 00:18:43.937 Removing: /dev/shm/spdk_tgt_trace.pid70418 00:18:43.937 Removing: /var/run/dpdk/spdk0 00:18:43.937 Removing: /var/run/dpdk/spdk_pid100428 00:18:43.937 Removing: /var/run/dpdk/spdk_pid100747 00:18:43.937 Removing: /var/run/dpdk/spdk_pid101412 00:18:43.937 Removing: /var/run/dpdk/spdk_pid101670 00:18:43.937 Removing: /var/run/dpdk/spdk_pid101715 00:18:43.937 Removing: /var/run/dpdk/spdk_pid101746 00:18:43.937 Removing: 
/var/run/dpdk/spdk_pid101970 00:18:43.937 Removing: /var/run/dpdk/spdk_pid102132 00:18:43.937 Removing: /var/run/dpdk/spdk_pid102214 00:18:43.937 Removing: /var/run/dpdk/spdk_pid102301 00:18:43.937 Removing: /var/run/dpdk/spdk_pid102342 00:18:43.937 Removing: /var/run/dpdk/spdk_pid102369 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70243 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70418 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70625 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70718 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70746 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70858 00:18:43.937 Removing: /var/run/dpdk/spdk_pid70876 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71064 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71144 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71229 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71329 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71415 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71454 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71491 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71556 00:18:43.937 Removing: /var/run/dpdk/spdk_pid71673 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72108 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72156 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72207 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72219 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72293 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72304 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72386 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72402 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72444 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72462 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72515 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72532 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72660 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72702 00:18:43.937 Removing: /var/run/dpdk/spdk_pid72783 00:18:43.937 Removing: /var/run/dpdk/spdk_pid73968 00:18:44.197 Removing: /var/run/dpdk/spdk_pid74174 00:18:44.197 Removing: 
/var/run/dpdk/spdk_pid74303 00:18:44.197 Removing: /var/run/dpdk/spdk_pid74908 00:18:44.197 Removing: /var/run/dpdk/spdk_pid75108 00:18:44.197 Removing: /var/run/dpdk/spdk_pid75237 00:18:44.197 Removing: /var/run/dpdk/spdk_pid75842 00:18:44.197 Removing: /var/run/dpdk/spdk_pid76161 00:18:44.197 Removing: /var/run/dpdk/spdk_pid76290 00:18:44.197 Removing: /var/run/dpdk/spdk_pid77620 00:18:44.197 Removing: /var/run/dpdk/spdk_pid77862 00:18:44.197 Removing: /var/run/dpdk/spdk_pid77997 00:18:44.197 Removing: /var/run/dpdk/spdk_pid79332 00:18:44.197 Removing: /var/run/dpdk/spdk_pid79575 00:18:44.197 Removing: /var/run/dpdk/spdk_pid79710 00:18:44.197 Removing: /var/run/dpdk/spdk_pid81045 00:18:44.197 Removing: /var/run/dpdk/spdk_pid81480 00:18:44.197 Removing: /var/run/dpdk/spdk_pid81609 00:18:44.197 Removing: /var/run/dpdk/spdk_pid83040 00:18:44.197 Removing: /var/run/dpdk/spdk_pid83288 00:18:44.197 Removing: /var/run/dpdk/spdk_pid83417 00:18:44.197 Removing: /var/run/dpdk/spdk_pid84848 00:18:44.197 Removing: /var/run/dpdk/spdk_pid85103 00:18:44.197 Removing: /var/run/dpdk/spdk_pid85238 00:18:44.197 Removing: /var/run/dpdk/spdk_pid86668 00:18:44.197 Removing: /var/run/dpdk/spdk_pid87146 00:18:44.197 Removing: /var/run/dpdk/spdk_pid87275 00:18:44.197 Removing: /var/run/dpdk/spdk_pid87408 00:18:44.197 Removing: /var/run/dpdk/spdk_pid87820 00:18:44.197 Removing: /var/run/dpdk/spdk_pid88536 00:18:44.197 Removing: /var/run/dpdk/spdk_pid88895 00:18:44.197 Removing: /var/run/dpdk/spdk_pid89574 00:18:44.197 Removing: /var/run/dpdk/spdk_pid90002 00:18:44.197 Removing: /var/run/dpdk/spdk_pid90739 00:18:44.197 Removing: /var/run/dpdk/spdk_pid91133 00:18:44.197 Removing: /var/run/dpdk/spdk_pid93051 00:18:44.197 Removing: /var/run/dpdk/spdk_pid93479 00:18:44.197 Removing: /var/run/dpdk/spdk_pid93907 00:18:44.197 Removing: /var/run/dpdk/spdk_pid95951 00:18:44.197 Removing: /var/run/dpdk/spdk_pid96425 00:18:44.197 Removing: /var/run/dpdk/spdk_pid96917 00:18:44.197 Removing: 
/var/run/dpdk/spdk_pid97956 00:18:44.197 Removing: /var/run/dpdk/spdk_pid98263 00:18:44.197 Removing: /var/run/dpdk/spdk_pid99185 00:18:44.197 Removing: /var/run/dpdk/spdk_pid99501 00:18:44.197 Clean 00:18:44.456 15:27:50 -- common/autotest_common.sh@1451 -- # return 0 00:18:44.456 15:27:50 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:18:44.456 15:27:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.456 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 15:27:50 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:18:44.456 15:27:50 -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:44.456 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:18:44.456 15:27:50 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:18:44.456 15:27:50 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:18:44.456 15:27:50 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:18:44.456 15:27:50 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:18:44.456 15:27:50 -- spdk/autotest.sh@394 -- # hostname 00:18:44.456 15:27:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:18:44.716 geninfo: WARNING: invalid characters removed from testname! 
00:19:11.281 15:28:14 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:11.281 15:28:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:13.190 15:28:19 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:15.099 15:28:21 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:17.006 15:28:23 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:19.543 15:28:25 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:22.083 15:28:27 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:22.083 15:28:27 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:22.083 15:28:27 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:22.083 15:28:27 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:22.083 15:28:27 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:22.083 15:28:27 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:22.083 + [[ -n 6156 ]] 00:19:22.083 + sudo kill 6156 00:19:22.094 [Pipeline] } 00:19:22.110 [Pipeline] // timeout 00:19:22.115 [Pipeline] } 00:19:22.131 [Pipeline] // stage 00:19:22.137 [Pipeline] } 00:19:22.151 [Pipeline] // catchError 00:19:22.160 [Pipeline] stage 00:19:22.163 [Pipeline] { (Stop VM) 00:19:22.175 [Pipeline] sh 00:19:22.460 + vagrant halt 00:19:25.025 ==> default: Halting domain... 00:19:33.169 [Pipeline] sh 00:19:33.469 + vagrant destroy -f 00:19:36.010 ==> default: Removing domain... 
00:19:36.022 [Pipeline] sh 00:19:36.306 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:36.317 [Pipeline] } 00:19:36.328 [Pipeline] // stage 00:19:36.333 [Pipeline] } 00:19:36.345 [Pipeline] // dir 00:19:36.351 [Pipeline] } 00:19:36.363 [Pipeline] // wrap 00:19:36.369 [Pipeline] } 00:19:36.382 [Pipeline] // catchError 00:19:36.391 [Pipeline] stage 00:19:36.394 [Pipeline] { (Epilogue) 00:19:36.407 [Pipeline] sh 00:19:36.692 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:40.906 [Pipeline] catchError 00:19:40.908 [Pipeline] { 00:19:40.920 [Pipeline] sh 00:19:41.207 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:41.207 Artifacts sizes are good 00:19:41.217 [Pipeline] } 00:19:41.230 [Pipeline] // catchError 00:19:41.242 [Pipeline] archiveArtifacts 00:19:41.249 Archiving artifacts 00:19:41.377 [Pipeline] cleanWs 00:19:41.393 [WS-CLEANUP] Deleting project workspace... 00:19:41.393 [WS-CLEANUP] Deferred wipeout is used... 00:19:41.400 [WS-CLEANUP] done 00:19:41.402 [Pipeline] } 00:19:41.417 [Pipeline] // stage 00:19:41.422 [Pipeline] } 00:19:41.436 [Pipeline] // node 00:19:41.441 [Pipeline] End of Pipeline 00:19:41.487 Finished: SUCCESS